Data fabric: Unlocking all of data’s superpowers

Sjoukje Zaal
February 19, 2025

Data fabric helps organizations overcome the challenge of managing overwhelming data by unifying and streamlining disparate sources. It connects and integrates data across environments, transforming complexity into actionable insights that drive innovation.

Utilizing data is not a passing trend; it’s a core element of business strategy. However, despite its potential, many organizations struggle with turning overwhelming amounts of data into valuable insights. This is where data fabric comes in, offering a solution to unify and streamline disparate data sources. By connecting and integrating data from various environments, data fabric provides a clear path from complexity to actionable insights that drive innovation and improve competitive advantage.

The data challenge: Why data fabric matters

Businesses face the dual challenge of becoming data-driven while navigating an increasingly complex landscape. From data silos to a lack of coordination between business and IT, and growing concerns about privacy and governance, the hurdles are many. But amidst all these challenges, one thing remains clear: data is a key asset in today’s digital world. The real question isn’t whether data is valuable; it’s how organizations can leverage it effectively. In a fast-paced, competitive environment, businesses that can unlock the value of their data will make more informed decisions, optimize operations, and stay ahead of competitors. This is where data fabric proves its worth, making it easier to integrate, analyze, and secure data, driving both innovation and business success.

The need for data fabric: Bringing order to chaos

A data fabric acts as a unified platform that connects and manages data from various sources – whether on-premises, in the cloud, or in a hybrid setup. It integrates structured, semi-structured, and unstructured data, creating a cohesive environment where all data types work together seamlessly. In other words, data fabric eliminates the silos that stifle innovation and ensures that data is always ready to provide real-time insights.

Today’s data environments are anything but simple. The idea of funneling all data into one central repository doesn’t hold up in a world of distributed systems and multi-cloud ecosystems. Data fabric allows organizations to keep their data where it is while still making it accessible and manageable, so it’s easier to extract the insights that power decision-making and business growth.

Foundation layer: The role of metadata 

The foundation of data fabric is metadata: the data about your data. In a world where data exists in multiple locations, formats, and systems, metadata helps connect the dots. Rather than moving data around unnecessarily, metadata enables data virtualization, allowing real-time access without physically transferring data. This approach streamlines data discovery, integration, and governance, ensuring that the right data is accessible at the right time. This metadata layer is also what makes data fabric “smart.” It doesn’t just connect data; it helps organize it in a way that makes sense. By automating processes like data discovery and governance, metadata enables organizations to manage complex data environments more efficiently while maintaining control over quality and security.
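To make the metadata-driven approach more concrete, here is a minimal sketch in Python of how a metadata catalog can support discovery and virtualized access, so data is fetched from its source only when a consumer asks for it. All class names, datasets, and locations below are invented for illustration; a real data fabric would plug in actual connectors, security, and governance policies.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

import pandas as pd


@dataclass
class DatasetMetadata:
    """Describes a dataset without containing the data itself."""
    name: str
    location: str          # e.g. "s3://shop/orders" or "onprem.crm.customers" (illustrative)
    owner: str
    tags: List[str]
    loader: Callable[[], pd.DataFrame]  # fetches the data on demand


class MetadataCatalog:
    """A tiny 'data fabric' layer: discovery and virtual access driven by metadata."""

    def __init__(self) -> None:
        self._catalog: Dict[str, DatasetMetadata] = {}

    def register(self, meta: DatasetMetadata) -> None:
        self._catalog[meta.name] = meta

    def discover(self, tag: str) -> List[str]:
        """Find datasets by business tag instead of by physical location."""
        return [m.name for m in self._catalog.values() if tag in m.tags]

    def read(self, name: str) -> pd.DataFrame:
        """Virtualized access: the caller never needs to know where the data lives."""
        return self._catalog[name].loader()


# Hypothetical sources: in practice these loaders would wrap cloud or on-premises connectors.
catalog = MetadataCatalog()
catalog.register(DatasetMetadata(
    name="crm_customers", location="onprem.crm.customers", owner="sales-ops",
    tags=["customer", "pii"],
    loader=lambda: pd.DataFrame({"customer_id": [1, 2], "segment": ["A", "B"]})))
catalog.register(DatasetMetadata(
    name="web_orders", location="s3://shop/orders", owner="e-commerce",
    tags=["customer", "orders"],
    loader=lambda: pd.DataFrame({"customer_id": [1, 2], "amount": [120.0, 80.0]})))

print(catalog.discover("customer"))   # ['crm_customers', 'web_orders']
orders = catalog.read("web_orders")   # data is pulled only when it is actually needed
```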

Composable data products: Innovation at speed

One of the most powerful features of data fabric is the ability to create composable data products. These are reusable, modular datasets or services that can be combined in various ways to create new capabilities or services. Instead of reinventing the wheel every time a new business need arises, organizations can use existing data products to accelerate innovation.

This modular approach allows businesses to rapidly respond to market demands, creating new offerings or features without starting from scratch. It’s about leveraging what’s already available, making it easier and faster to innovate while maintaining flexibility.
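As a small illustration of the idea (the product names and values below are invented), two existing data products can be composed into a new one without rebuilding the underlying pipelines:

```python
import pandas as pd


def customer_profile_product() -> pd.DataFrame:
    """Reusable data product: curated customer master data (illustrative values)."""
    return pd.DataFrame({"customer_id": [1, 2, 3], "segment": ["A", "B", "A"]})


def order_history_product() -> pd.DataFrame:
    """Reusable data product: cleaned order history (illustrative values)."""
    return pd.DataFrame({"customer_id": [1, 1, 2], "amount": [120.0, 35.0, 80.0]})


def churn_features_product() -> pd.DataFrame:
    """New capability composed from existing products instead of built from scratch."""
    spend = order_history_product().groupby("customer_id", as_index=False)["amount"].sum()
    profile = customer_profile_product()
    return profile.merge(spend, on="customer_id", how="left").fillna({"amount": 0.0})


print(churn_features_product())
```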

Data democratization and self-service: Empowering teams to innovate

Traditionally, data has been controlled by IT departments, slowing down decision-making and limiting innovation. With data fabric, that changes. Data democratization means giving everyone in the organization the ability to access and work with data, not just the IT team. This shift empowers teams to experiment, collaborate, and iterate faster, without waiting for IT to process every request.

By enabling self-service analytics and empowering teams to create their own data products, organizations can speed up innovation and improve collaboration. Developers, data scientists, business analysts – everyone gets the tools to explore data and generate insights in ways that drive the business forward. In short, data fabric fosters a more agile and responsive organization.

AI integration: Powering smarter insights 

AI relies on high-quality, structured data to provide valuable insights. Fortunately, data fabric is designed to work seamlessly with various data structures – whether it’s tables, graphs, or lists – ensuring that AI and machine learning models have access to the rich, reliable datasets they need to make accurate predictions.

With strong governance and data lineage, data fabric provides the foundation for AI and machine learning models to thrive. This enables innovations in areas like predictive analytics, personalized recommendations, and automation, while ensuring the integrity and security of the data being used.

Data fabric also enhances the AI process by automating many of the routine tasks that traditionally take up valuable time. Tasks like data integration, quality management, and anomaly detection can be automated using AI, freeing up organizations to focus on more high-value innovations that drive business growth.

“By enabling self-service analytics and empowering teams to create their own data products, organizations can speed up innovation and improve collaboration.”

The transformative power of data fabric 

Data fabric isn’t just a technological solution – it’s a game-changer for how organizations manage their data. By leveraging metadata and AI-driven solutions, data fabric helps organizations create a flexible, responsive, and innovative data environment. This environment fosters faster insights, quicker development cycles, and the ability to respond to market changes with agility.

Perhaps most importantly, data democratization enables a culture of innovation where employees across departments can contribute to business success. As data volumes grow and complexities increase, data fabric will be the key to not only managing these challenges but turning them into new opportunities for growth.

Data fabric is an essential solution for organizations looking to stay competitive and innovative in a world where data is becoming increasingly complex. By integrating AI, automating routine tasks, and empowering teams to access and use data freely, you’ll position your organization for success in the digital economy.

Start innovating now –

Build a unified data fabric

Begin by implementing a data fabric that integrates and manages data across all environments: cloud, on-premises, and hybrid. Real-time access and seamless connectivity eliminate silos, unlocking new possibilities for faster insights and product development. 

Create reusable data products

Transform your data into modular, reusable products. This approach accelerates innovation and enables faster iteration, so you can create new services and capabilities without starting from scratch every time. 

Empower teams with data

Democratize your data by making it accessible to everyone. Self-service capabilities allow teams to experiment and innovate quickly, fostering a culture of continuous, business-driven innovation.

Interesting read?

Capgemini’s Innovation publication, Data-powered Innovation Review – Wave 9 features 15 captivating innovation articles with contributions from leading experts from Capgemini, with a special mention of our external contributors. Explore the transformative potential of generative AI, data platforms, and sustainability-driven tech. Find all previous Waves here.

Meet the authors

Sjoukje Zaal

CTO Microsoft


    Mulder and Scully for fraud prevention:
    Teaming up AI capabilities

    Joakim Nilsson
    March 5, 2025

While Mulder trusts his gut, Scully trusts the facts – in fraud detection, we need both. Hybrid AI blends the intuition of an LLM with the structured knowledge of a knowledge graph, letting agents uncover hidden patterns in real time. The truth is out there—now we have the tools to find it.

Fraud detection can be revolutionized with hybrid AI. Combining the “intuitive hunches” from LLMs with a fraud-focused knowledge graph, a multi-agent system can identify weak signals and evolving fraud patterns, moving from detection to prevention in real time. The challenge? Rule sets need to be cast in iron, whereas the system itself must be like water: resilient and adaptive. Historically, this conflict has been unsolvable. But that is about to change.

    A multi-agent setup

    Large language models (LLMs) are often criticized for hallucinating: coming up with results that seem feasible but are plain wrong. In this case though, we embrace the LLM’s gut-feeling-based approach and exploit its capabilities to identify potential signs of fraud. These “hunches” are mapped onto a general ontology and thus made available to symbolic AI components that build on logic and rules. So, rather than constricting the LLM, we are relying on its language capabilities to spot subtle clues in text. Should we act directly on these hunches, we would run into a whole world of problems derived from the inherent unreliability of LLMs. However, this is the task of a highly specialized team of agents, and there are other agents standing by, ready to make sense of the data and establish reliable patterns.

    When we talk about agents, we refer to any entity that acts on behalf of another to accomplish high-level objectives using specialized capabilities. They may differ in degree of autonomy and authority to take actions that can impact their environment. Agents do not necessarily use AI: many non-AI systems are agents, too. (A traditional thermostat is a simple non-AI agent.) Similarly, not all AI systems are agents. In this context, the agents we focus on primarily handle data, following predefined instructions and using specific tools to achieve their tasks.

    We define a multi-agent system as being made up of multiple independent agents. Every agent runs on its own, processing its own data and making decisions, yet staying in sync with the others through constant communication. In a homogeneous system, all agents are the same and their complex behavior solves the problem (as in a swarm). Heterogeneous systems, though, deploy different agents with different capabilities. Systems that use agents (either single or multiple) are sometimes called “agentic” architectures or frameworks.
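As a toy sketch of such a heterogeneous setup, the snippet below pairs an intuition agent with a rule-based analyst that communicate over a shared board. The keyword matcher stands in for an LLM so the example stays self-contained, and all flags, phrases, and thresholds are invented for illustration:

```python
from typing import Dict, List


class IntuitionAgent:
    """Stand-in for an LLM-backed 'Mulder': turns free text into tentative hunches.
    A real system would call a language model here; keyword matching keeps the sketch runnable."""

    HINTS = {"new address": "ADDRESS_CHANGE", "cash": "CASH_PREFERENCE", "urgent": "PRESSURE_TACTIC"}

    def process(self, case_id: str, text: str, board: List[Dict]) -> None:
        for phrase, flag in self.HINTS.items():
            if phrase in text.lower():
                board.append({"case": case_id, "flag": flag, "source": "intuition"})


class AnalystAgent:
    """Rule-based 'Scully': only escalates when several independent flags line up."""

    def process(self, case_id: str, board: List[Dict]) -> str:
        flags = {msg["flag"] for msg in board if msg["case"] == case_id}
        return "ESCALATE" if len(flags) >= 2 else "MONITOR"


board: List[Dict] = []                      # shared communication channel between agents
mulder, scully = IntuitionAgent(), AnalystAgent()

mulder.process("case-42", "Urgent request, please pay cash to a new address", board)
print(board)                                # hunches mapped to generic flags
print(scully.process("case-42", board))     # 'ESCALATE'
```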

    For example, specialized agents can dive into a knowledge graph, dig up specific information, spot patterns, and update nodes or relationships based on new findings. The result? A more dynamic, contextually rich knowledge graph that evolves as the agents learn and adapt.

The power is in the teaming. Think of the agents Mulder and Scully from The X-Files television show: Mulder represents intuitive, open-minded thinking, while Scully embodies rational analysis. In software, there have always been many Scullys but, with LLMs, we now have Mulders too. The challenge, as in The X-Files, is in making them work together effectively.

    The role of a universal ontology

We employ a universal ontology to act as a shared language or, perhaps a better analogy, a translation exchange, ensuring that both intuitive and analytical agents communicate in terms that can be universally understood. This ontology primarily consists of “flags” – generic indicators associated with potential fraud risks. These flags are intentionally defined broadly, capturing a wide range of behaviors or activities that could hint at fraudulent actions without constraining the agents to specific cases.

    The key to this system lies not in isolating a single flag but in identifying meaningful combinations. A single instance of a flag may not signify fraud; however, when several flags emerge together, they provide a more compelling picture of potential risk.

    “This innovation shifts the approach from simple fraud detection to proactive prevention, allowing authorities to stay ahead of fraudsters with scalable systems that learn and evolve.”

    Hybrid AI adaptability

    The adaptability of the system lies in the bridging between neural and symbolic AI as the LLM distills nuances in texts into hunches. They need to be structured and amplified for our analytical AI to be able to access them. As Igor Stravinsky wrote in his 1970 book Poetics of Music in the Form of Six Lessons, “Thus what concerns us here is not imagination itself, but rather creative imagination: the faculty that helps us pass from the level of conception to the level of realization.” For us, that faculty is the combination of a general ontology and vector-based similarity search. They allow us to connect hunches to flags based on semantic matching and thus address the data using general rules. Because we work in a graph context, we can also explore direct, indirect, and even implicit relations between the data.
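A minimal sketch of that bridging step follows, with a toy bag-of-words vectorizer standing in for a real embedding model; the flag definitions and the hunch text are invented for illustration:

```python
import math
from collections import Counter
from typing import Dict, List


def embed(text: str) -> Counter:
    """Toy stand-in for an embedding model: a simple bag-of-words vector."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


# Generic ontology flags with short natural-language definitions (illustrative only).
ontology: Dict[str, str] = {
    "IDENTITY_MISMATCH": "applicant details do not match registered identity records",
    "RAPID_APPLICATIONS": "many applications submitted in a short period of time",
    "SHARED_CONTACT": "same phone number or address reused across unrelated applicants",
}


def map_hunch_to_flags(hunch: str, top_k: int = 2) -> List[str]:
    """Connect a free-text LLM hunch to ontology flags via semantic similarity."""
    scores = {flag: cosine(embed(hunch), embed(definition)) for flag, definition in ontology.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]


hunch = "several applicants reuse the same phone number and address"
print(map_hunch_to_flags(hunch))   # ['SHARED_CONTACT', ...] - ready for rule-based scoring
```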

    Now let’s explore how our team of agents picks up and amplifies weak signals, and how these signals, once interwoven in the graph, can lead the system to identify patterns spanning time and space, patterns it was not designed to identify.

    A scenario: Welfare agencies have observed a rise in fraudulent behavior, often uncovered only after individuals are exposed for other reasons like media reports. Identifying these fraud attempts earlier, ideally at the application stage, would be extremely important.

    Outcome: By combining intuitive and analytical insights, authorities uncover a well-coordinated fraud ring that would be hard to detect through traditional methods. The agents map amplified weak signals as well as explicit and implicit connections. Note also that the system was not trained on detecting this pattern; it emerged thanks to the weak signal amplification.

    One of the powers of hybrid AI lies in its ability to amplify weak signals and adapt in real time, uncovering hidden fraud patterns that traditional methods often miss. By blending the intuitive insights of LLMs with the analytical strength of knowledge graphs and multi-agent systems, we’re entering a new era of fraud detection and prevention – one that’s smarter, faster, and more effective. As Mulder might say, the truth is out there, and with the right team, we’re finally close to finding it.

    Start innovating now –

    Implement a universal ontology

    Create a shared ontology to bridge neural (intuitive) and symbolic (analytical) AI agents, transforming weak signals for deeper analysis by expert systems and graph-based connections.

    Form specialized multi-agent teams

    Build teams of neural (real-time detection) and symbolic (rule-based analysis) AI agents, each specialized with tools for their role.

    Leverage graph technology for cross-referencing

    Use graph databases to link signals over time and across data sources, uncovering patterns like fraud faster, earlier, and at a lower cost than current methods.

    Interesting read?

Capgemini’s Innovation publication, Data-powered Innovation Review – Wave 9 features 15 captivating innovation articles with contributions from leading experts from Capgemini, with a special mention of our external contributors. Explore the transformative potential of generative AI, data platforms, and sustainability-driven tech. Find all previous Waves here.

    Meet the authors

    Joakim Nilsson

Knowledge Graph Lead, Insights & Data Sweden, Capgemini
Based in Malmö, Sweden, Joakim is part of the CTO office, where he drives the expansion of Knowledge Graphs in the region. He has been involved in Knowledge Graph projects as a consultant for both Capgemini and Neo4j. Joakim holds a master’s degree in mathematics and has been working with Knowledge Graphs since 2021.


      The grade-AI generation:
      Revolutionizing education with generative AI

      Dr. Daniel Kühlwein
      March 19, 2025

Our Global Data Science Challenge is shaping the future of learning. In an era when AI is reshaping industries, Capgemini’s 7th Global Data Science Challenge (GDSC) tackled education.

      By harnessing cutting-edge AI and advanced data analysis techniques, participants, from seasoned professionals to aspiring data scientists, are building tools to empower educators and policy makers worldwide to improve teaching and learning.

The rapidly evolving landscape of artificial intelligence presents a crucial question: how can we leverage its power to solve real-life challenges? Capgemini’s Global Data Science Challenge (GDSC) has been answering this question for years and, in 2024, it took on its most significant mission yet – revolutionizing education through smarter decision making.

The need for innovation in education is undeniable. Understanding which learners are making progress, which are not, and why is critical if education leaders and policy makers are to prioritize interventions and education policies effectively. According to UNESCO, a staggering 251 million children worldwide remain out of school. Among those who do attend, the average annual improvement in reading proficiency at the end of primary education is alarmingly slow—just 0.4 percentage points per year. This is a stark challenge for global foundational learning, hampering efforts to achieve the learning goal set forth in the Sustainable Development Agenda.

      The grade-AI generation: A collaborative effort

The GDSC 2024, aptly named “The Grade-AI Generation,” brought together a powerful consortium. Capgemini offered its data science expertise, UNESCO contributed its deep understanding of global educational challenges, and Amazon Web Services (AWS) provided access to cutting-edge AI technologies. This collaboration unlocks the hidden potential within vast learning assessment datasets, transforming raw data into actionable insights for decision making that could change the future of millions of children worldwide.

At the heart of this year’s challenge lies the PIRLS 2021 dataset – a comprehensive global survey encompassing over 30 million data points on 4th grade children’s reading achievement. This dataset is particularly valuable because it provides rich, standardized data that allows participants to identify patterns and trends across different regions and education systems. By analyzing factors like student performance, demographics, instructional approaches, curriculum, and home environment, an AI-powered education policy expert can offer insights that would take far more time and resources to gain through traditional methods. Participants were tasked with creating an AI-powered education policy expert capable of analyzing this rich data and providing data-driven advice to policymakers, education leaders, and teachers, as well as parents and students themselves.

      Building the future: Agentic AI systems

      The challenge leveraged state-of-the-art AI technologies, particularly focusing on agentic systems built with advanced Large Language Models (LLMs) such as Claude, Llama, and Mistral. These systems represent a significant leap forward in AI capabilities, enabling more nuanced understanding and analysis of complex educational data.

      “Generative AI is the most revolutionary technology of our time,” says Mike Miller, Senior Principal Product Lead at AWS, “enabling us to leverage these massive amounts of complicated data to capture for analysis, and present knowledge in more advanced ways. It’s a game-changer and it will help make education more effective around the world and enable our global community to commit to more sustainable development.“

      The transformative potential of AI in education

      The potential impact of this challenge extends far beyond the competition itself. As Gwang-Chol Chang, Chief, Section of Education Policy at UNESCO, explains, “Such innovative technology is exactly what this hackathon has accomplished. Not just only do we see the hope for lifting the reading level of young children around the world, we also see a great potential for a breakthrough in education policy and practice.”

      The GDSC has a proven track record of producing innovations with real-world impact. In the 2023 edition, “The Biodiversity Buzz,” participants developed a new state-of-the-art model for insect classification. Even more impressively, the winning model from the 2020 challenge, “Saving Sperm Whale Lives,” is now being used in the world’s largest public whale-watching site, happywhale.com, demonstrating the tangible outcomes these challenges can produce. 

      Aligning with a global goal

This year’s challenge aligns perfectly with Capgemini’s belief that data and AI can be a force for good. It embodies the company’s mission to help clients “get the future you want” by applying cutting-edge technology to solve pressing global issues.

      Beyond the competition: A catalyst for change

The GDSC 2024 is more than just a competition; it’s a global collaboration that brings together diverse talents to tackle one of the world’s most critical challenges. By bridging the gap between complex, costly-to-collect learning assessment data and actionable insights, participants have the opportunity to make a lasting impact on global education.

      A glimpse into the future

The winning team, ‘insAIghtED’, consists of Michal Milkowski, Serhii Zelenyi, Jakub Malenczuk, and Jan Siemieniec, based in Warsaw, Poland. They developed an innovative solution aimed at enhancing actionable insights using advanced AI agents. Their model leverages the PIRLS 2021 dataset, which provides structured, sample-based data on reading abilities among 4th graders globally. However, recognizing the limitations of relying solely on this dataset, the team expanded their model to incorporate additional data sources such as GDP, life expectancy, population statistics, and even YouTube content. This multi-agent AI system is designed to provide nuanced insights for educators and policymakers, offering short answers, data visualizations, more elaborate explanations, and even a fun section to engage users.

The architecture of their solution involves a lead data analyst, data engineer, chart preparer, and data scientist, each contributing to different aspects of the model’s functionality. The system is capable of querying databases, aggregating data, performing internet searches, and preparing detailed answers. By integrating various data sources and employing state-of-the-art AI technologies like LangChain and crewAI, the insAIghtED model delivers impactful, real-world, actionable insights that go beyond the numbers, helping to address complex educational challenges and trends.
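For readers curious what such a role-based agent team can look like, below is a heavily simplified sketch using the crewAI library mentioned above. The roles and task wording are illustrative rather than the team’s actual code, an LLM provider must be configured separately for the agents to run, and parameter details may differ between crewAI versions:

```python
# Sketch only: assumes `pip install crewai` and an LLM provider configured via environment variables.
from crewai import Agent, Task, Crew

# Roles mirror the architecture described above; all wording is illustrative.
lead_analyst = Agent(
    role="Lead data analyst",
    goal="Turn questions about PIRLS 2021 into clear analysis plans",
    backstory="Expert in international learning assessments and education policy.",
)
data_engineer = Agent(
    role="Data engineer",
    goal="Query and aggregate the PIRLS 2021 data required by the plan",
    backstory="Knows the PIRLS 2021 schema and writes efficient queries.",
)
chart_preparer = Agent(
    role="Chart preparer",
    goal="Summarize aggregated results as a chart plus a short narrative",
    backstory="Specialist in communicating statistics to policymakers.",
)

question = "Visualize the number of students who participated in the PIRLS 2021 study per country"
tasks = [
    Task(description=f"Plan the analysis for: {question}",
         expected_output="A short, numbered analysis plan", agent=lead_analyst),
    Task(description="Execute the plan and return participant counts per country",
         expected_output="A table of counts per country", agent=data_engineer),
    Task(description="Describe a bar chart of the counts and list three key takeaways",
         expected_output="Chart description and takeaways", agent=chart_preparer),
]

crew = Crew(agents=[lead_analyst, data_engineer, chart_preparer], tasks=tasks)
print(crew.kickoff())  # the agents hand results to one another in sequence
```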

      Example:

Figure 1: An example of the winning model answering the prompt: “Visualize the number of students who participated in the PIRLS 2021 study per country.”

      As we stand on the brink of an AI-powered educational revolution, the Grade-AI Generation challenge serves as a beacon of innovation and hope. It showcases how the combination of data science, AI, and human creativity and passion can pave the way for a future where quality education is accessible to all, regardless of geographical or socioeconomic barriers.

      Start innovating now –

      Dive into AI for good
      Explore how AI can be applied to solve societal challenges in your local community or industry.

      Embrace agentic AI systems
      Start experimenting with multi-agent AI systems to tackle complex, multi-faceted problems in your field.

      Collaborate globally
      Seek out international partnerships and datasets to bring diverse perspectives to your AI projects.

Interesting read?

Capgemini’s Innovation publication, Data-powered Innovation Review – Wave 9 features 15 captivating innovation articles with contributions from leading experts from Capgemini, with a special mention of our external contributors. Explore the transformative potential of generative AI, data platforms, and sustainability-driven tech. Find all previous Waves here.

      Meet our authors

      Dr. Daniel Kühlwein

Managing Data Scientist, AI Center of Excellence, Capgemini

      Mike Miller

      Senior Principal Product Lead, Generative AI, AWS

      Gwang-Chol Chang

      Chief, Section of Education Policy, Education Sector, UNESCO

      James Aylen

      Head of Wealth and Asset Management Consulting, Asia


      Seven predictions for 2025
      What’s hot in data, analytics, and AI?

      Ron Tolido
      April 1, 2025

      Peering into the future is a tricky business – especially in the ever-changing realm of data, analytics, and AI. But if there’s one thing we’ve learned, it’s that uncertainty never stopped us from trying. After all, we’re in a part of the technology profession where predicting the unpredictable is often part of the job description.

      We called upon seven of our data-powered innovation movers and shakers to dust off their (frozen) crystal balls and share their visions of what 2025 has in store. Their insights reveal a world where AI balances on the edge of legality, cloud platforms morph into something entirely new, and synthetic data booms with promise – and no, it’s not artificial hype. From “vertical AI” that digs deep into industry needs to conversational AI that knows what you want before you do, these trends give a glimpse of the fascinating, yet challenging, future of data and AI.

      Will their predictions come true? Only time will tell, but one thing’s for sure: 2025 is shaping up to be a year we’ll be talking about for a long time to come. And data and AI are right in the middle of it.

      Let’s dive in.

      AI is not a crime

AI is not a crime, though sometimes it feels like one, given its swift advance beyond legal bounds. While identity theft, deepfakes, and media manipulation emerge from Pandora’s box, many AI ethicists focus on fairness and transparency, skirting around AI’s darker uses. As AI matures and criminal applications surge, discussions will inevitably shift from theoretical ideals to practical realities, especially when organizations face public lawsuits under new AI regulations. This shift will force experts to tackle how AI can harm society directly. So, while AI is not a crime, it certainly invites a compelling conversation about AI and crime. Let’s face it: When it comes to AI, the real crime would be ignoring the conversation altogether. – Marijn Markus, AI Lead, Managing Data Scientist, Insights and Data, Capgemini

      Augment my governance

Prepare to be captivated. AI agents are about to revolutionize data management in the upcoming year. They will shoulder burdensome data tasks, enabling companies to reach new pinnacles of productivity and efficiency. With AI seamlessly managing data collection, analysis, and access for us, we can finally focus on something much more crucial: getting value out of data for the business. It’s high time to achieve that, isn’t it? The future of data management is upon us, and missing the opportunity of augmenting is not an option. Because in this data game, those who augment govern – and those who don’t get governed. – Liz Henderson, Executive Advisor, Insights and Data, Capgemini

      Cloud encounters of the third kind

As we look towards 2025, cloud data platforms are in for a new round of transformative change. We all recognize the need for high-quality enterprise data as a foundation for relevant, trustworthy AI. Add to that the need to adhere to regulations, data sovereignty, privacy, sustainability, and cost. It soon becomes apparent that a smart mix of different and diverse cloud approaches for data will play a crucial role in the upcoming year. I expect to see a pendulum swing towards larger investments in cloud data platforms, yet it will be clouds of a different kind. Or, to put it differently, the forecast calls for cloud cover – but with a whole new kind of silver lining. – Prithvi Krishnappa, Global Head of Data and AI, Sogeti

      Let’s talk better

Conversational AI will continue to be a hot topic in 2025. Contact center transformation, leveraging “classic” AI and generative AI, will help save labor costs by billions and improve customer service significantly. These technologies can handle routine inquiries and provide instant responses, freeing human agents to deal with more complex issues. Imagine a world where you no longer have to press 1 for assistance – AI will anticipate your every need before you even know you have one. While human agents may become less central, customer satisfaction might reach an all-time high as AI enhances efficiency and personalization in ways we never thought possible. It’ll be the talk of the town in 2025. – Monish Suri, Global Google Partnership Lead, Insights and Data, Capgemini

      When AI goes vertical

We will see a major rise in domain-specific vertical AI solutions that are finely tuned through rigorous test-driven prompt engineering. These purpose-built AI models will deliver more reliable and precise insights tailored to the unique needs of their industries. For instance, in healthcare, imagine AI predicting patient outcomes like a crystal ball, analyzing vast datasets of medical histories and treatment plans to conjure better patient care and optimized resources. In financial services, AI will become the ultimate fraud-buster, identifying unusual patterns in real time and safeguarding assets with previously unseen precision and confidence. Vertical AI solutions will not only streamline operations but also spark innovation by providing industry-specific intelligence and efficiency. The only way is vertical! – Dan O’Riordan, VP, AI and Data Engineering, Capgemini

      The semantics of confidence

We’ve seen many companies adopting the principles of data mesh and semantics as part of their modern data analytics platform strategy. If nothing else, it’s needed in 2025 and beyond to comply. For example, the EU AI Act requires close tracking of the purpose of AI models and the underlying data used to train it. This can only be done by enhancing data platforms with semantics, connecting original data sources, forged data products in AI models, and all business dashboards and AI-infused applications. It creates high levels of confidence in both data and AI, next to many new, innovative opportunities to leverage data. The endgame? Nothing less than a full digital twin of the enterprise, a hallmark of data mastery. – Arne Rossmann, Innovation Lead, Insights and Data, Capgemini

      Synthetic data boom

I predict a boom in synthetic data. But first of all, what is synthetic data? It’s artificially generated but realistic data that mirrors real patterns without using sensitive information. Why is synthetic data crucial? It tackles privacy, security, data scarcity, and control issues. Traditional data sources are hitting their limits. Privacy laws are tightening, and real-world data often lacks the diversity we need. Synthetic data lets companies create datasets that mimic real shopping behavior in retail or complex production processes in manufacturing without exposing sensitive info or being held back by data gaps. I foresee that 2025 will be the year synthetic data moves center stage. Companies ready to leverage it will build powerful, adaptable models faster than ever. The synthetic data boom will be anything but artificial. – Dinand Tinholt, VP, Insights and Data, North America, Capgemini
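As a toy illustration of the principle (all values are invented, and a simple multivariate-normal generator stands in for the richer techniques, such as copulas, GANs, or diffusion models, used in practice), a synthetic table can reproduce the statistical shape of a sensitive one without containing any real record:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# A small 'real' dataset standing in for sensitive customer records (values invented).
real = pd.DataFrame({
    "age": rng.normal(41, 12, 500).clip(18, 90),
    "monthly_spend": rng.gamma(2.0, 60.0, 500),
})

# Fit a simple multivariate-normal model to the real data, then sample fresh rows.
# The synthetic rows follow the learned patterns but correspond to no actual customer.
synthetic = pd.DataFrame(
    rng.multivariate_normal(real.mean().values, real.cov().values, size=500),
    columns=real.columns,
)

print(real.describe().round(1))
print(synthetic.describe().round(1))   # similar means, spreads, and correlation structure
```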

      Interesting read?

Capgemini’s Innovation publication, Data-powered Innovation Review – Wave 9 features 15 captivating innovation articles with contributions from leading experts from Capgemini, with a special mention of our external contributors. Explore the transformative potential of generative AI, data platforms, and sustainability-driven tech. Find all previous Waves here.

      Meet our authors

      Marijn Markus

AI Lead, Managing Data Scientist, Insights and Data, Capgemini

Liz Henderson

Executive Advisor, Insights and Data, Capgemini

Prithvi Krishnappa

Global Head of Data and AI, Sogeti

Monish Suri

Global Google Partnership Lead, Insights and Data, Capgemini

Dan O’Riordan

VP, AI and Data Engineering, Capgemini

Arne Rossmann

Innovation Lead, Insights and Data, Capgemini

Dinand Tinholt

VP, Insights and Data, North America, Capgemini


Capgemini’s winning hand: Receiving three Partner of the Year Awards at Google Cloud Next

      Herschel Parikh
      Apr 8, 2025


I’m thrilled to share that Capgemini has achieved a triple win at the Google Cloud Partner of the Year awards.

      These awards recognize our innovative solutions and the significant impact we have made across various industries.

      • Global Industry Solutions
      • Sustainability Industry Solutions
      • Country: Denmark

      With nearly 15 years of collaboration with Google Cloud, we’ve unlocked incredible potential and value through our joint efforts. This partnership has consistently demonstrated the power of a combined approach in driving business transformation and exploring new possibilities.

      Reflecting on our growth from last year, this year highlights our strategic focus towards sustainability and industry-specific solutions. We are more committed than ever to addressing global challenges and creating value for our clients through sustainable and innovative solutions.

      Sustainability industry solutions

One of the awards we received is “Sustainability Industry Solutions”. This award recognizes partners that helped customers in the sustainability industry achieve outstanding success through Google Cloud. Sustainability is a core component of Capgemini’s DNA, and it is embedded in every service and solution we develop. Our collaboration with Google Cloud has enabled us to help clients become more sustainable. For instance, our Fractals solution enables end-to-end product-level data collaboration on pre-competitive supply chain issues, including ESG challenges such as food waste, health, decarbonization, human rights, and living wages.

      Additionally, our Business for Planet Modeling (BfPM) solution with Google Cloud is a set of climate risk advisory services designed to drive better climate risk analysis for the financial services industry. BfPM leverages Google Cloud’s analytics and artificial intelligence to simulate the financial impact of climate change and global variables, enhancing forecasting and supporting better decision-making.  We’ll be exploring these solutions in person, at Google Cloud Next.

      Global industry solutions

      In addition, we received an award for Global Industry Solutions. This award recognizes partners that leveraged Google Cloud solutions to create comprehensive and compelling solutions that made a significant impact across multiple industries and regions. Our deep industry expertise and use of Google Cloud resources, including generative AI, have enabled us to provide clients worldwide with tailored solutions. For example, our Industry Cloud for Grocers on Google Cloud has helped grocers enhance customer experiences while improving inventory visibility and profitability.

      Partner of the Year Award, Country: Denmark

At Google Cloud Next, we’re hosting a session to discuss their AI-driven demand forecasting approach using Google Cloud. This session will highlight our work in Denmark with Danfoss, a leader in energy-efficient solutions, and how they partnered with Google Cloud and Capgemini to tackle demand forecasting challenges, stay competitive, and support global sustainability goals.

      Impact on our clients

      Our partnership with Google Cloud has brought significant benefits to our clients, and we’re proud of the successful projects which have driven value for them. In our recent lookbook, we talk about this in more depth. For instance, we modernized IT infrastructure with data cloud solutions at Wind Tre, processing 1,000 events per second and making 100 million decisions per day. We also created the first generative AI chatbot in Catalan using Google Cloud’s Vertex AI, preserving language and improving response times. Additionally, we helped L’Oreal connect the physical and digital worlds using a digital twin solution on Google Cloud.

      These accomplishments showcase our ability to leverage Google Cloud’s capabilities to deliver innovative solutions that address specific industry challenges and enhance customer experiences.

      A big thank you

      These achievements would not have been possible without the hard work and dedication of our teams and the incredible partnership with Google Cloud.

      Looking ahead, we have ambitious goals for our partnership with Google Cloud, and we really look forward to bringing these accolades to life through our participation at Google Cloud Next, as a Luminary sponsor. It would be great to meet you there at booth #2240, Apr 8-11 or to discover how we’re helping companies achieve the potential of Innovation, meet intelligence.

      Author

      Herschel Parikh

      Global Google Cloud Partner Executive
Herschel is Capgemini’s Global Google Cloud Partner Executive. He has over 12 years’ experience in partner management, sales strategy & operations, and business transformation consulting.

        Find out more about our Google Cloud partnership


        What are Liquid Neural Networks?
        And why should you care?

        Pascal Brier
        Oct 25, 2024

Earlier this week, our friends at Liquid.AI introduced their first products powered by Liquid Neural Networks (LNNs). This is a new generation of AI models that promise to achieve state-of-the-art performance at every scale, while maintaining a smaller memory footprint, greater computing efficiency, and much better transparency.

        But what are Liquid Neural Networks exactly? And why should you care?

        To understand, let’s consider the classical Large Language Models we’ve been building over the past few years.

LLMs, like ChatGPT, are great statistical learners, meaning they have to “memorize” trillions of variations and patterns from an immense training dataset in order to coherently mimic those patterns in their outputs. This is why models are becoming better but also exponentially larger with each iteration. This reliance on scale is why LLMs have grown into models with trillions of parameters. To produce increasingly complex and nuanced outputs, LLMs need ever more parameters, which means more data, more computational power, and a larger model size.

        This approach is becoming extreme, as constantly increasing the number of parameters to improve performance is both resource-intensive and costly. In our race to develop Generative AI, we are also increasingly scaling black boxes with little explainability.

        In contrast, Liquid Neural Networks (LNNs) offer the promise of fundamentally more adaptive and efficient model architecture.

        Instead of relying on larger and larger networks of simple neurons, LNNs use smaller networks of more capable neurons that adjust in real-time to new inputs. In simplified terms, these neurons are mathematical formulas that are adaptive – they can change their behavior based on new inputs. They adjust their connections and processing methods dynamically, like a formula that updates itself as new information comes in.

        Since these neurons are not static, they continuously evolve based on the information they process, allowing LNNs to learn on the go and adapt to new environments without needing retraining. This adaptability means that LNNs can perform complex tasks with far fewer parameters. As a result, LNNs are better suited to handling dynamic, unpredictable situations, such as real-time decision-making in autonomous systems or robotics, where flexibility and efficiency are key.
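For readers who want the mathematics behind this idea, one published formulation of such adaptive neurons, the liquid time-constant (LTC) cell described by Hasani and colleagues, models each neuron’s hidden state x(t) with an input-dependent differential equation. It is shown here as an illustrative reference for how a “formula that updates itself” can be written down, not as a description of Liquid.AI’s commercial models:

```latex
\frac{dx(t)}{dt} = -\left[\frac{1}{\tau} + f\big(x(t), I(t), t, \theta\big)\right] x(t)
                   + f\big(x(t), I(t), t, \theta\big)\, A
```

Here tau is a base time constant, I(t) is the incoming signal, f is a small learned network with parameters theta, and A is a bias vector. Because f scales both the decay rate and the drive toward A, the neuron’s effective time constant changes with every new input, which is what lets the network keep adapting after training.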

This is why LNNs have immense potential. Same or maybe better performance (future will tell), but far less computational power, energy, and cost. This has an obvious impact on the sustainability profile of AI, but also opens up many more deployment options and use cases. LNN-based architectures enable AI deployments on smaller edge devices—such as mobile phones, vehicles, smart-home systems, airplanes, and industrial machinery—without relying on massive, cloud-based computing resources.

        Try to imagine a fully offline automotive AI system that runs efficiently on a standard PC CPU without needing specialized hardware like GPUs. Or an industrial robot that continuously adapts to new tasks and surroundings, making real-time adjustments as it learns from ongoing interactions.

The implications of Liquid Neural Networks are profound. Their ability to deliver state-of-the-art performance with fewer resources and real-time adaptability represents a significant advance in the evolution of AI. For business leaders and CxOs interested in staying ahead in the AI race, keeping a close eye on the evolution of Liquid Neural Networks is not just advisable—it’s essential. This could very well be the future of sustainable, efficient, and explainable AI.

        Meet our author

        Pascal Brier

Chief Innovation Officer, Capgemini


          From innovation to transformation: How AI agents are shaping the future of work

          Gianluca Simeone & Chiranth Ramaswamy
          28 Jan 2025

          Imagine this for a future work experience: a user in procurement starts the day by asking their virtual assistant to create a purchase order.

          This is an action that requires only the basic facts, ranging from vendor and quantity to item and date, with no manual data entry needed to complete the process.

          Likewise, a manufacturing user asks their system to tell them what orders they are likely to miss today, and receives not just a detailed report of real-time progress against plan, but also a series of options for addressing potential problems.

          These and countless other use case scenarios form the true vision for business AI in the modern work environment. This vision is already building significant momentum, even while introducing various organizational and technical challenges – and it’s only a couple of years away from transforming the everyday interaction with digital applications.

          Early gains

The pace of AI and Gen AI adoption is obviously going to differ by organization, depending upon individual business use cases and perceived benefits. But identifying the tangible value underpinning these considerations means first defining a future state, and imagining a way of working that combines human and machine components into a complete and harmonized whole.

          Such thinking typically puts the focus on “quick wins” made possible by AI, including:

          • Automating manual, repetitive tasks, which can extend from data entry to scheduling and report creation, thereby freeing people up to focus on more creative and complex work.
          • Boosting user productivity: where individuals no longer need to access a range of systems to complete tasks and find answers, and instead rely on AI agents to do the heavy lifting – while proactively delivering insight before they even seek it.
• Streamlining business processes: where agents offer recommendations and autonomously take actions across a range of commonly completed tasks.
• Increasing business resilience by proactively designing response plans to critical scenarios.
• Supporting complex SAP implementations: for example, supporting the project teams’ activities on RISE with SAP and GROW with SAP integration, working with an Augmented Software Development Life-cycle, and ensuring high data quality.

According to the Capgemini Research Institute’s report, Data-powered enterprises 2024, AI has the capacity to streamline business processes and enhance business resilience. This aligns perfectly with the potential of Gen AI to transform user experiences and create new revenue opportunities.

          All told, Gen AI promises to transform the user experience in terms of the way we interact with information and back-end systems, discover insights, and find inspiration. Just as importantly, the technology is rapidly introducing new revenue opportunities and removing “skills barriers” – such as by enabling people to create complex spreadsheet analysis based on a simple query.

          Bumps in the road

          When the potential of AI is combined with industry and business use cases, the reasons to act become even harder to ignore. Hence the growing focus today on removing any obstacles in the way – with the headlines being:

          • A lack of trust: a concern that spans ethical considerations as well as a resistance inside many organizations to actively experiment with new – and therefore unproven – technologies.
          • A seeming lack of maturity: where decision-makers are waiting on the technology to become “perfect” before committing, held back by talk of AI hallucinations and output bias.
          • Regulatory concerns: where frameworks such as the European Union’s AI Act 2024 aim to ensure AI systems are safe, transparent, and respectful of fundamental rights – but can impact future innovation.
          • Human nature: which sees people preferring the “comfort zone” that comes with traditional ways of working.

          This last point is understandable given the fact that AI brings with it a demand to change standard operating procedures. A transformation takes place in the way tasks are completed, to optimize the mix of human and artificial intelligence required at distinct touchpoints along the way.

          A dynamic move forward

Overcoming these impediments is an important next step that requires the continued evangelization of AI and Gen AI from technology leaders. This is a task that Capgemini is heavily involved in, helping our customers to better understand the most suitable options for Gen AI – while also providing training and education to master the different aspects of change management.

          Such support is a vital way station for any AI roadmap, as organizations seek guidance on the right approach and common pitfalls, as well as ways to introduce the necessary safeguards and appropriate ways to keep a “human in the loop.”

          The good news, certainly from a technical perspective, is that Gen AI does not require major changes to existing IT environments, especially when the AI capabilities offered by SAP and global hyperscalers are taken into account. This situation might change with the advent of multi-agent AI systems, and the number of AI agents interacting autonomously – but that is a bridge most will worry about crossing when they finally reach it.

          Final thoughts

          Gen AI is often described as a train that is gaining speed. In this context, the key question facing organizations is when to get onboard: should they join now while advances are steady, or risk trying to gain access when the locomotive is hurtling through the station at full throttle?

          What’s emerging as best practice is the idea of starting small and validating the potential of Gen AI for different use cases. This approach focuses on non-business-critical processes that can be addressed by out-of-the-box functionality available from providers like ѻý and SAP. Once these initiatives prove their value, organizations gain the confidence to proceed with more advanced design strategy to tackle the bigger task of integrating Gen AI into the very fabric of day-to-day operations.

          Ultimately, it comes down to one overriding thought: how to ensure your business doesn’t get left behind.

          Watch this space for our next blog post.

          Author

          Chiranth Ramaswamy

          Senior Director, Global SAP CoE
Chiranth is a Global Gen AI Ninja and part of the Capgemini SAP CoE. He leads the delivery of Gen AI projects, the training of associates, and the exploration of advances in Gen AI, and has led the build and deployment of Gen AI-based tools and processes in Capgemini’s SAP projects. His role as SAP India Industry leader involves the development and use of Capgemini’s industry solutions, including industry reference models built on Signavio, pre-configured S/4HANA industry solutions, and line-of-business solutions tailored to SAP’s Clean Core approach.


            Reengineering the heart of utilities with augmented service

            Christian Schacht
            10 Nov 2022

            Three ways in which E&U companies can leverage a data-driven customer service function as a business driver

The restructuring of the retail energy market has given consumers unprecedented levels of choice and flexibility when it comes to their energy needs. For traditional utilities, this means they need to prove their value anew by providing customers not only with exceptional service, but also with valuable new services.

            Unfortunately, data suggests that for many legacy players, there may be quite a bit of ground to make up on the customer service front. According to market research companies Forrester and Harris Interactive, 75% of energy and utility (E&U) customers believe it takes too long to reach a live agent, while 72% are frustrated at having to explain their issue multiple times due to poor customer service and a lack of omnichannel presence.

            To compete and win in today’s market, utilities need to treat data as a strategic asset and put customer service at the heart of the organization. This means reframing customer service from a cost center to a growth engine. Here are three ways E&U companies can leverage customer service to advance the business:

            1. Customer service as a sales driver.


            Every contact with a customer service channel is not just an issue to resolve – it’s an opportunity to grow. When equipped with the right information, systems, and tools, agents will become brand advocates, building rapport with customers to not only fix issues but guide growth for their brand.

            The good news is that E&U organizations have a tremendous amount of data at their disposal to proactively identify customized recommendations and priority opportunities for each customer. An advanced customer service function can provide agents with access to these insights via a simple and intuitive interface. Suggested actions can be generated in real-time to guide a personalized customer journey and enable valuable cross- and up-selling opportunities.

            For example, if an external data source indicates that a customer recently purchased an electric vehicle, one can infer that they may need a home charger or a home battery storage system. Even if the customer called to inquire about a bill or update their billing details, customer service agents can take this opportunity to engage the customer in a timely, relevant conversation about their needs – and meet them in the moment.

            2. Customer service as a cost reducer.

            There are two basic ways companies can boost profitability: 1. By increasing revenue or 2. By reducing costs.

            Strong customer service can help the business reduce costs in many ways, but perhaps the most notable is in staunching losses related to customer churn.

            As the costs associated with acquiring a new customer grow year over year and increased competition makes the playing field even more crowded, E&U companies need to be hyper-focused on customer retention.

            Unfortunately, most large outsourced contact centers, the likes of which were introduced as a cost-cutting measure, are evaluated on metrics like average handling time. This has been shown to reduce incentives to improve net promoter score (NPS) and revenue per customer. In other words, a traditional customer service agent is motivated to solve an issue and end a call, whereas the business needs to create a model where service agents work to build strong, healthy, loyal customer relationships.

            By putting customer service back at the heart of the business, companies can begin to refocus on the indicators that actually matter to their business – retention, loyalty, revenue per customer, and NPS.

            3. Customer service as a brand differentiator.

Let’s be honest: for most people, calling a customer service line isn’t exactly an enjoyable experience. Which raises the question: what would happen if it were?

            What would your customers do if they ended a call with a sense of accomplishment? What would happen if the agent connected with them on a human level and helped them solve the problem they called about and addressed another need besides? What would your company get if contacting customer service became less of a chore and more of a value-add service?  

            In an increasingly competitive market, it’s more important than ever for brands to provide meaningful, relevant interactions. Customer service is one of the most important touchpoints within the customer lifecycle and one of the most impactful too. For organizations that strive to position customer service as a differentiator, the focus is on the customer and their needs. In this model, the customer isn’t just a passive consumer, but an active part of the ecosystem.

            Repositioning customer service as a growth engine with Augmented Service from ѻý

            Due to the market challenges E&U companies of all kinds are facing, ѻý recently developed and launched a new Augmented Service offering specifically for E&U organizations. Augmented Service brings together two critical components that help utilities revolutionize their customer service function:

            1. Rich data-driven insights to identify trends, unmet needs, portfolio opportunities, and priorities
            2. AI-enabled Customer Data Hub to empower personalized interactions and guide the customer journey.

With Augmented Service, utilities can establish a clear customer journey and omnichannel capabilities that drive lower cost to serve, higher NPS, lower churn, a higher first-contact fix rate, and higher revenue per customer. With the help of Augmented Service from ѻý, customer service is more than a function – it’s a strategic growth engine.

            Is your organization ready to revolutionize your customer service function and transform it from a cost center to a growth engine? ѻý can help. Download our Augmented Service brochure and contact our team of experts to schedule a consultation today.

              5G and security: are you ready for what’s coming? 
              New risks and complex challenges require a comprehensive new strategy

              Chhavi Chaturvedi
              25th July, 2022

              5G opens up all new avenues of attack – is your organization ready?

Every tech revolution comes with risks, and 5G is no exception. From IoT applications to the 4G-to-5G transition, the scale of 5G usage is opening up an enormous surface area to potential attackers. The promise of high bandwidth and low latency in the coming years is extraordinary, but organizations that are slow to react to these threats are taking a gamble. Fortunately, there are a number of security measures that can substantially reduce these risks. Read on to learn how to keep pace with the security demands of 5G today.

              Security challenges

Every promised benefit of 5G brings with it a corresponding risk. The number of connected IoT devices continues to grow rapidly and is on course to pass 14 billion this year. Each new edge-computing device creates new vulnerabilities for bad actors to exploit. The decentralized nature of IoT products makes security measures difficult to implement at scale, while 5G’s greater bandwidth has the potential to fuel new DDoS attacks with the power to overwhelm organizations. And the expansive nature of 5G itself poses new risks: as the number of users increases into the millions and billions and networks expand to accommodate more devices, network visibility plummets. It becomes harder to track and prevent threats, especially against sophisticated attackers. Device vulnerabilities, air-interface vulnerabilities, RAN, backhaul, 5G packet core & OAM, and SGi/N6 & external roaming vulnerabilities all need to be re-examined.

              Network Slicing is not enough 

Many services in today’s industries require different performance characteristics – high throughput, low latency, high reliability, and so on – which can be achieved through network slicing, which serves multiple services over customized logical networks. In theory, network slicing should raise security – like the bulkheads on a ship, which contain a potential breach to one flood zone. This is the same logic behind IT network segmentation, an established best practice. However, just like network segmentation, network slicing alone does not guarantee that threats are contained. Without additional measures, they’re likely to pass seamlessly into the wider system. Network slicing also faces security challenges connected with resource sharing among slice tenants and slice security coordination; these are fairly straightforward to solve, but they do require attention.

              Mitigation Approaches 

              Businesses deploying 5G-connected equipment need an up-to-date set of security solutions capable of monitoring and protecting against the new generation of cyber threats. The specifics will vary according to each user, but the backbone of the new strategy may look something like the following: 

Security edge protection

Security edge protection is the foundation of 5G security, upon which all other strategic considerations rest. The following methods can help secure 5G edge installations (a brief access-control sketch follows the list):

              • Encrypted tunnels, firewalls and access control to secure edge computing resources 
              • Automated patching to avoid outdated software and to reduce attack surface 
• AI/ML technology to detect breaches and trigger alerts or automated responses
              • Continuous maintenance and monitoring for the discovery of known and unknown vulnerabilities  
              • Securing the edge computing devices beyond the network layer 
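As a simple illustration of the access-control point above, the sketch below shows the shape of an allowlist check an edge gateway might apply before granting a device access to edge compute resources. The device IDs and logic are assumptions made for the example; a real deployment would lean on certificates, firewalls, and network-level controls rather than application code alone.

```python
# Minimal access-control sketch for an edge gateway (illustrative only).
# Unknown devices are rejected and an alert is logged for follow-up.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("edge-gateway")

# In practice this would come from a device registry or certificate authority;
# here it is a hypothetical hard-coded allowlist.
ALLOWED_DEVICE_IDS = {"sensor-001", "sensor-002", "camera-017"}

def authorize(device_id: str) -> bool:
    """Allow only registered devices to reach edge compute resources."""
    if device_id in ALLOWED_DEVICE_IDS:
        log.info("device %s authorized", device_id)
        return True
    log.warning("ALERT: unknown device %s rejected at the edge", device_id)
    return False

if __name__ == "__main__":
    for device in ("sensor-001", "rogue-999"):
        authorize(device)
```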

              Zero trust architecture: never trust, always verify 

Zero Trust Architecture (ZTA) eliminates implicit trust by continuously validating every action at every step. Based on perimeter-less security principles, ZTA requires each asset to implement its own security controls. It includes security features such as those listed below (a minimal request-verification sketch follows the list):

              • Continuous logging, continuous monitoring, alerts and metrics 
              • Threat detection and response 
              • Policies & permissions 
              • Infrastructure security & secure software deployment lifecycle (supply chain security) 
              • Data confidentiality from service providers of both hardware and software 
              • Container isolation 
              • Multiple authentication and TLS security 
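The “never trust, always verify” principle can be pictured with a small Python sketch in which every request is authenticated and checked against policy, with no trust carried over from earlier calls. The secret, users, and permissions below are hypothetical, and a production ZTA would rely on mTLS, short-lived credentials, and a dedicated policy engine rather than this toy logic.

```python
# Toy zero-trust sketch: every request must carry a valid signed token AND
# pass a policy check before it is served. Illustrative assumptions only.
import hashlib
import hmac

SECRET_KEY = b"rotate-me-regularly"                     # hypothetical shared secret
POLICY = {"alice": {"read"}, "bob": {"read", "write"}}  # hypothetical permissions

def sign(user: str) -> str:
    return hmac.new(SECRET_KEY, user.encode(), hashlib.sha256).hexdigest()

def handle_request(user: str, token: str, action: str) -> str:
    # 1. Verify identity on every request - no implicit trust from earlier calls.
    if not hmac.compare_digest(token, sign(user)):
        return "denied: invalid token"
    # 2. Verify authorization against policy, again on every request.
    if action not in POLICY.get(user, set()):
        return f"denied: {user} may not {action}"
    return f"ok: {action} performed for {user}"

if __name__ == "__main__":
    print(handle_request("alice", sign("alice"), "read"))   # allowed
    print(handle_request("alice", sign("alice"), "write"))  # blocked by policy
    print(handle_request("bob", "forged-token", "write"))   # blocked by verification
```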

              Container-based technology

Containers bring the potential benefits of efficiency, agility, and resiliency. Analysts expect that up to 15% of enterprise applications will run in a container environment by 2024, up from less than 5% in 2020. Containers are orchestrated from central, configurable control planes that are used to scale workloads up and down, collect logs and metrics, and monitor security. Containers also bring a few unique security risks, but they are solvable.

When containers run in privileged mode or as root, they provide attackers with direct access to the kernel, from which they can escalate their privileges and gain access to sensitive information. It is therefore essential to add role-based access control and limit the permissions of deployed containers. It is easy to run a container as a non-root user, simply by adding the appropriate instructions to the Dockerfile. Two more ways to enhance container security are to reject pods or containers that request privileged mode, or to keep privileged containers but limit their access to namespaces.
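As a rough illustration of rejecting privileged or root containers, the sketch below runs a simple policy check over a pod-spec-like structure. The field names mirror the Kubernetes securityContext convention, but this is a hypothetical stand-in for a real admission controller or policy engine, not production code.

```python
# Simplified admission-style check (illustrative): flag container specs that
# request privileged mode or omit runAsNonRoot.
def violations(pod_spec: dict) -> list[str]:
    problems = []
    for container in pod_spec.get("containers", []):
        ctx = container.get("securityContext", {})
        name = container.get("name", "<unnamed>")
        if ctx.get("privileged", False):
            problems.append(f"{name}: privileged mode is not allowed")
        if not ctx.get("runAsNonRoot", False):
            problems.append(f"{name}: must set runAsNonRoot")
    return problems

if __name__ == "__main__":
    pod = {
        "containers": [
            {"name": "app", "securityContext": {"runAsNonRoot": True}},
            {"name": "sidecar", "securityContext": {"privileged": True}},
        ]
    }
    for problem in violations(pod):
        print("reject:", problem)
```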

              Automated operations and AI 

The complexity of 5G infrastructure requires security applied at multiple levels. Handling that complexity manually – threats, risks, diverse devices, scaling, and more – is so difficult as to be impractical. Additionally, manual operations introduce an element of uncertainty that may in some cases be exploited. There is absolutely a place for human ingenuity, but increasingly the operations level needs to be automated.

What about AI/ML technologies – are they helpful, or just hype? They already have a role in security today. The next step in AI/ML-based security will involve deep learning, through which the system builds its own capabilities through experience – theoretically going so far as to predict threats before they’re deployed. Claims about revolutionary AI protection need to be treated sceptically, but at the same time the potential for AI to fundamentally alter network security is real. This is a space to watch.
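To give a flavour of what the ML piece can look like, here is a small sketch that uses scikit-learn’s IsolationForest to flag anomalous traffic patterns. The features, figures, and contamination setting are invented for illustration; a production system would feed far richer telemetry into purpose-built detection tooling.

```python
# Illustrative anomaly detection over per-device traffic features
# (requests per minute, average payload bytes). Numbers are made up.
from sklearn.ensemble import IsolationForest

normal_traffic = [
    [120, 540], [115, 520], [130, 560], [125, 530], [118, 545],
    [122, 550], [128, 535], [119, 525], [124, 555], [121, 540],
]
model = IsolationForest(contamination=0.1, random_state=42).fit(normal_traffic)

new_samples = [[123, 542], [950, 9000]]  # the second sample looks like a flood
for sample, label in zip(new_samples, model.predict(new_samples)):
    status = "ALERT" if label == -1 else "ok"
    print(status, sample)
```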

              Building on firm ground 

The ѻý Research Institute recently probed organizations’ preparedness for cyberattacks and revealed a concerning disconnect: 51% of industrial organizations expect cyberattacks on smart factories to increase over the next 12 months, and yet nearly as many (47%) report that cybersecurity is not a C-level concern. We see the lack of a comprehensive, system-wide approach to security as a serious long-term threat.

It is tempting to describe security breaches as instantaneous, but an honest examination often reveals vulnerabilities that were left exposed for months or years without adequate protection. Security you can rely on starts early, with solid fundamentals across people, process, and technology. It’s not easy, but it’s doable.

              We can see the risks that come with 5G. Let’s put a security plan in place now. To learn more about our 5G security capabilities, contact us below. 

TelcoInsights is a series of posts about the latest trends and opportunities in the telecommunications industry – powered by a community of global industry experts and thought leaders.

              ѻý wins CyberArk’s 2023 MSP global partner of the year award

              Andrew Critchley
              14 Feb 2024

              CyberArk, a pioneer in privileged access management (PAM) and a global leader in identity security, has announced the winners of its 2023 Global Partner of the Year awards, recognizing its top-performing partners globally. ѻý proudly received the Managed Security Services (MSP) Partner of the Year award, showcasing our unmatched ability to deliver exceptional Managed Security Services in collaboration with CyberArk.

              Clay Rogers, Vice President of Global Strategic Alliances at CyberArk, remarked, “ѻý’s recognition as our MSP Partner of the Year was driven by many factors, including the strength of their Managed Services offer around CyberArk, the joint innovation activities we have underway to take MSP to the next level, and ѻý’s commitment to CyberArk at all levels in our organisations. This award underscores ѻý’s ability to bring tangible business value to their customers through CyberArk technologies and is a testament to the effectiveness of our partnership. Congratulations on this well-deserved achievement.”

The award being announced by Chris Moore, Senior Vice President, Global Channels, CyberArk


              ѻý’s Managed Services Powered by CyberArk

In today’s digital landscape, identity and access management (IAM) plays a critical role in protecting organizations from increasingly sophisticated cyber threats. Our global partnership with CyberArk equips us to offer cutting-edge IAM and PAM solutions, empowering clients to secure their digital assets and ensure regulatory compliance effectively.

              For many years, ѻý has been a trusted partner of CyberArk, delivering managed and professional services to clients. Additionally, we utilize CyberArk technology to secure our broader managed services and internal systems. This collaboration has led to the development of ѻý identity-as-a-service (IDaaS), our optimized, pre-packaged managed service for IAM, built on CyberArk’s PAM capabilities.

              The CyberArk MSP Global Partner of the Year award recognizes our ability to deliver market-leading managed services in IAM through ѻý IDaaS. It also acknowledges our commitment to working closely with CyberArk to deliver innovative, value-based solutions to clients. Whether it’s safeguarding privileged accounts, securing critical assets, or ensuring regulatory compliance, ѻý remains dedicated to delivering IAM solutions that enable organizations to succeed in the digital age.

              ѻý’s Leading Identity and Access Management Practice

              ѻý boasts one of the world’s largest IAM practices, offering IAM advisory consulting, IAM professional services, and IAM managed services across all aspects of Identity to major organizations worldwide. Our extensive ecosystem of IAM technology partners, including market leaders like CyberArk, Ping Identity, and SailPoint, enables us to provide best-in-class services tailored to each client’s unique needs. IAM is a core capability within our global Cybersecurity practice, allowing us to deliver comprehensive cybersecurity services encompassing identity, zero-trust, cloud security, and cyber defense centers.

              As AI-based, context-sensitive, adaptive risk-based approaches to IAM become increasingly prevalent, we foresee identity security becoming tightly integrated within the broader ecosystem of managed security services. By integrating IAM more deeply across the enterprise and leveraging diverse cybersecurity signals, services can be effectively secured in modern open environments. Our goal is to leverage our combined cloud, AI, and cyber capabilities to drive innovation in IAM, working closely with partners like CyberArk to redefine identity and meet our customers’ business needs today and tomorrow.

              For more insights, click here.

              Author

              Andrew Critchley

Expert in Identity and Access Management, Security Architecture
