Events and activities are substantiated by a stream of publications stemming from the work of the Strategic Business Analytics Chair's research teams of professors, PhD candidates and students, in some cases in collaboration with external partners. While some publications are strictly academic in nature, others are accessible to a broader audience and engage the public. External think tanks and the media are involved as appropriate.
CSR policies in East Asia and Europe: quantifying influences and differences
by Mathilde Bernard, 06.03.22, supervised by Jeroen Rombouts
Why is it that North-East Asia and Europe, two regions composed mainly of developed countries, perform differently on corporate social responsibility (CSR) topics?
Our cross-cultural literature review and data analysis yields the following three insights:
First, North-East Asian companies score significantly lower than their European counterparts. This gap is particularly visible in the Environmental and Social scores. However, these are also the two categories where North-East Asia is improving fastest. We predict that North-East Asia will begin closing the gap with Europe within five years.
Second, North-East Asian and European companies do not share a common philosophy of corporate and social culture. While Asian companies are driven more by duty and results, European companies tend to be oriented towards personal achievement. In addition, CSR is a subject that is strongly politically guided.
Third, a major problem lies in the frameworks currently used to analyze and value CSR performance, as they are built on Western values and perspectives. This prevents North-East Asian companies from being graded on features that are relevant to their activities and societies.
To conclude, there is a need for a globalized framework that allows companies to be compared on an international level on their CSR performance.
Read the full study here.
by Jeroen Rombouts, 21.04.21, co-written with Regis Amichia, Data Science Lead, Foxintelligence
For a downloadable document, click here.
Over the last few years, many organizations have invested substantially in data and analytics. The objective is to become more data-driven and to operate like a tech organization. Companies willing to go further than symbolically reprofiling the organization invest in AI to move from descriptive analytics to predictive and prescriptive analytics. This requires a solid data and AI governance program, an IT infrastructure that makes all data readily available in a so-called data lake, and piloting of the organization through a carefully selected portfolio of key performance indicators. It has been widely documented, however, that the most important hurdle is the shift to a culture that embraces agility and experimentation. In fact, it is the humans who need reskilling. Consequently, training programs have been launched, and large organizations can now boast hundreds of use cases created by interdisciplinary teams and shared on internal repositories for further development and innovation. The hard question comes next: what is the return on these huge investments? Why are so few AI use cases in production, and where is the generation of tangible value? There is a gap that needs to be filled, and MLOps brings part of the answer.
Before going into MLOps, let us take a step back. Finding the best project-management methodology has always been a brain teaser for the software development community. It started with the waterfall approach, introduced in the 1970s by Winston Royce. This linear approach defines several stages in the software development lifecycle: requirements, analysis, design, coding, testing, and delivery. Each stage must be finished before the next starts, and clients only see results at the end of the project. This methodology creates a "tunnel of development" between gathering the client requirements and delivering the project. For many years, this linear approach caused tremendous losses of resources: an error in the design stage, or clients changing their minds, meant rebooting the development process. Furthermore, engineering teams were clustered by stage (developers for coding, QA teams for testing and sysadmins for delivery), which created friction and fertile ground for communication errors. This is one of the reasons that led, around 2001, to a new methodology: the agile approach.
Agile principles have infused the software engineering culture for more than 20 years. They have endowed companies with the ability to adapt to new information rather than following an immutable plan. In a fast-changing business environment, this is a question of survival more than a simple change of methodology. Companies now put customer involvement and iteration at the heart of the software development process. They bring together engineers with complementary skills in teams coordinated by product managers to regularly release pieces of software, gather feedback and adapt the roadmap accordingly. This was a true revolution, but it was not perfect: there was still a gap between software development and what happens after the software is released, also known as operations. In 2008, Patrick Debois and Andrew Clay Shafer filled this gap with the DevOps (a contraction of development and operations) methodology. By bringing all teams (software developers, QA and sysadmins) together in the development and operations processes, waiting times are reduced and everyone can work more closely to develop better solutions.
Back to today: what can DevOps bring in the era of artificial intelligence? The needs are the same: companies are looking for a methodology to develop and scale AI algorithms, generate value and reap the benefits of their investments. Data leaders have recently begun to investigate the benefits of the DevOps methodology. However, machine learning and AI algorithms have a peculiarity that drastically differentiates them from traditional software: the data.
Data is everywhere and has become a tremendous source of value for companies. The recent advances in fundamental research and the democratization of machine learning through open-source solutions have made artificial intelligence accessible to all. Data scientists are among the most sought-after profiles in the current job market, as they promise to be the key to unlocking the value of data. But in the same way that software developers needed the DevOps methodology to maximize their productivity and scale software development in controlled and secure environments, data scientists need a framework to develop and scale AI-powered solutions. Since those solutions are different from traditional software, they need to be managed accordingly. It is therefore essential to use DevOps practices, but data leaders also need to acknowledge the singularity of using data within software that makes decisions autonomously. This is where Machine Learning Operationalization (MLOps) comes to the rescue.
MLOps is a set of practices bringing DevOps, machine learning and data engineering together to deploy and maintain ML systems in production. It is the missing piece that allows organizations to release the value contained in data using artificial intelligence. By formalizing and standardizing processes, MLOps fosters experimentation but also guarantees rapid delivery, scaling machine learning solutions beyond their use-case status. Once solutions are in production and consume new data, monitoring predictive performance is key. No ML solution outperforms universally across all problems, so organizations need to monitor predictive performance in real time. MLOps helps monitor this performance and act when deterioration due to concept drift occurs. Automatically collecting the lifecycle information of algorithms, that is, tracking what was recalibrated, by whom and why, improves the learning process and supports reporting to auditors if required. Hence, accountability and compliance issues can be addressed.
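To make the monitoring idea concrete, here is a minimal, purely illustrative Python sketch of a rolling performance check that flags possible concept drift. The class name, window size and threshold are our own assumptions for illustration, not part of any specific MLOps toolkit.

```python
from collections import deque

class PerformanceMonitor:
    """Track a rolling error rate on labelled outcomes and flag possible drift."""

    def __init__(self, window: int = 100, baseline_error: float = 0.10,
                 tolerance: float = 0.05):
        self.errors = deque(maxlen=window)    # 1 = wrong prediction, 0 = correct
        self.baseline_error = baseline_error  # error rate measured at deployment
        self.tolerance = tolerance            # allowed degradation before alerting

    def record(self, prediction, actual) -> bool:
        """Log one labelled outcome; return True if drift is suspected."""
        self.errors.append(0 if prediction == actual else 1)
        if len(self.errors) < self.errors.maxlen:
            return False                      # not enough observations yet
        current = sum(self.errors) / len(self.errors)
        return current > self.baseline_error + self.tolerance
```

In practice such a check would sit behind the automated lifecycle tracking described above, triggering recalibration (and an audit-trail entry) when it fires.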
While most data training programs focus on the elements of machine learning, statistics and coding, and work on use cases in a sandbox environment, MLOps principles are not yet covered extensively. Furthermore, business leaders invest in AI without fully understanding how to create an efficient development and operations environment for their data teams. Filling the gap between data and operations is not straightforward. The complexity of ML algorithms, often treated as a black box run by data scientists who are supposedly the only ones in the company to understand them, excludes others from the development process and creates another gap between AI and business.
MLOps does not only concern engineers: every stakeholder in data-based solutions should be involved. The revolution of artificial intelligence is undoubtedly happening now, and all those who intend to be part of it will have a role in creating and running MLOps processes in their organization. Future data leaders should acquire basic MLOps skills in their training programs to remove the harmful and unnecessary boundary between business leaders and engineering teams around data-related topics.
We live in a world of experience. As people are increasingly always on and always connected, they look for experiences that are relevant and personalized for them, in the moment. Whether as consumers, citizens or workers, people expect experiences made for them and delivered instantly. Artificial intelligence enables and accelerates every organization's ability to shape this new dynamic.
Let's take a couple of examples. The cruise company Carnival is using AI to offer its guests a uniquely personalized experience from before they even board a ship to the moment they disembark. AI acts as a brain that anticipates what guests want and need, and then coordinates all of Carnival's people and resources on board to deliver uniquely personalized experiences for everyone. Or look at the Albert Einstein Hospital in São Paulo, where AI is managing patient flow from initial consultation through to admission and treatment. The result? Transformational levels of efficiency and improved care.
Transforming experiences everywhere
AI has the potential to transform the end-to-end processes of any organization. It can offer more accurate demand prediction, automate the supply chain and deliver more efficient and personalized customer service. By harvesting and analyzing ever-greater volumes of diverse data from a growing range of sources, AI is completely changing the experience of users, customers, employees and the wider society.
So what do we mean by a great experience? It has a number of key dimensions: personal, trusted, natural, intuitive, predictive, focused, immersive and even beautiful. AI enables these qualities through its ability to deliver personalization at scale. It can tune into and predict individuals’ intentions, preferences and behavior. That enables natural and intuitive experiences, optimized to an individual’s specific context and needs.
AI = outperformance
Organizations exploiting AI’s potential outperform their competitors. Accenture’s France Future Systems Research report shows that the top 10% of companies surveyed were more likely than the bottom 25% of performers to have adopted AI early and to have developed expertise in AI.
Accenture research shows that companies that have effectively embraced AI achieved nearly triple the return on AI investments compared with companies that have yet to embrace the equivalent technology. Organizations deploying AI at scale are seeing transformational change across their business, from demand prediction to automated supply chains and superior customer service.
A new age of customer experience
AI’s impact is already clear for today’s consumers, who expect experiences that are “always on, always me”: personalized, instantaneous and available at all times.
Around the world, AI is helping businesses meet those demands. How? By constantly drawing on and analyzing data from millions of interactions. The resulting insights enable organizations to adapt around the evolving needs of their customers and offer them relevant experiences, in the moment. Avianca Airlines, for example, has developed a chatbot to reduce their response time to customers. Spotify uses AI to tailor music recommendations according to a user’s listening history. McDonald’s and KFC are developing the use of AI to predict orders based on a customer’s previous purchase habits.
And AI doesn't just improve existing customer services: it also creates entirely new ones. In the beauty industry, for example, Shiseido uses AI to provide its customers with personalized skincare recommendations, all based on a selfie. The consumer uploads a picture to the company's Optune app and AI does the rest. It examines the picture and combines that analysis with data about the external environment and the individual's health and mood to create a uniquely personalized experience. L'Oréal employs AI and augmented reality in virtual try-on services to promote makeup and hair color products, using technology from its recently acquired company ModiFace.
A revolution in public services
Businesses are at the forefront of AI development and adoption. But governments are also recognizing AI’s potential to transform the experiences they create and deliver. The US Department of Defense, for example, uses AI to help plan deployments during crises, while NASA employs bots to aid in its finance and procurement processes.
In the near future, AI will improve an even wider range of public services and change how people live. It will help public healthcare practitioners to predict illness and create personalized and preventative treatment plans for citizens. New autonomous mobility solutions could transform public transport infrastructure, making it more efficient and cheaper to run. And AI can help elderly people vulnerable to loneliness to interact and share their experiences.
So in every sector and every sphere of life, AI is changing the art of the possible. Exciting? Yes. Challenging? Undoubtedly. But no one can avoid the impact of what AI brings. This is no longer an issue for the future. AI is real and happening today.
This article was co-written with Jean-Pierre Bokobza, Senior Managing Director, Accenture
by Jeroen Rombouts, 30.11.20, and Gerard Guinamand, Group Chief Data Officer, Engie
Over the last few years, the web giants have shown that using data to know your customers is key for developing new products and services and for beating the competition. These companies operate on a digital core, allowing data-augmented and data-driven decision-making, and are highly appreciated by investors given their massive market capitalisations. Even during the COVID-19 pandemic, the tech sector continued growing spectacularly given the acceleration in digitalising the way we work and interact with others.
Any large, mature company is inspired by the way tech companies operate and dominate. For example, in France, L’Oréal aims to become the top beauty tech company by using artificial intelligence and augmented reality. Since 2018, Carrefour has used the Carrefour-Google Lab to accelerate its digital transformation. Danone and Microsoft launched The AI Factory for Agrifood in 2020. Energy companies like Engie and EDF are pushed by the general public sentiment on climate change to become operationally excellent and greener. Young data talents are hired to help transform the companies and introduce the new data culture.
In theory, the smart use of data to create business value makes a lot of sense, though in practice traditional companies struggle to become more data-driven. Over the last few years, companies have invested heavily in data infrastructure, appointed chief data officers, and launched data training programs to convince every employee of the salient features of data and analytics. Consequently, massive amounts of data are stored in the cloud, and the question now is often "what can we do with this?" or "what is the actual return on all these data investments?". To answer such questions, a next step in the company's data maturity process is essential: the step towards becoming more data-informed, data-driven, and operationally excellent. It requires using data to look forward rather than backward, and therefore to make predictions. Put differently, after storing and categorising data, it is now time to use it for decision-making across all levels of the organisation, rather than in specific pockets. In fact, business decisions always implicitly include predictions, and it is time to make this process more formal and automatic thanks to the use of data.
The predictive paradigm is not only about recommendation algorithms and the like: it also allows for the use of data at the highest executive level to ensure that strategy is implemented. Specifically, the management of forward-looking key performance indicators (KPIs) allows for measuring and tracking the success of the company and setting clear objectives. This in itself generates valuable data that can be correlated with new initiatives to predict their success and gain a deeper understanding of their link with existing operations. To sum up, C-level executives need to start implementing strategy with data rather than strategy for data, so that the company's operating model can become data-centric in the same way as famous tech companies like Amazon and Alibaba.
Someone once said, "Making predictions is hard, especially about the future". Predictions are by nature uncertain, and this has to be incorporated when making business decisions, much as financial investors use more information than the average return of an asset when deciding to buy or sell it. Accurate predictions are obtained by combining various sorts of data, including external sources such as weather data in energy applications. Actually producing predictions from data is not a trivial task: it demands talented data scientists, advanced algorithms, and continuous performance monitoring. It is no surprise that the International Data Corporation expects global spending on artificial intelligence to increase from 43 billion EUR to 94 billion by 2024.
The International Energy Agency expects renewables to provide 80% of the growth in global electricity demand through 2030. In fact, solar- and wind-energy projects have become less expensive, and interest rates are historically low today. Furthermore, governments are highly supportive, exemplified by the European Green Deal, whose main ambition is making the EU climate neutral by 2050. Renewable energies are therefore becoming a key strategic goal for energy companies. Technically speaking, renewable energies (in particular wind and solar) have a high level of intermittency (night, absence of wind), but the number of plants cannot simply be increased to compensate for this lack of production, for economic and environmental reasons. This situation implies two main actions for energy companies such as ENGIE. First, identify the best sites for new installations. Second, get the best performance from the plants, taking into account operating constraints (noise, for example), for a higher volume of electricity generated.
Data plays an essential role in the success of renewable development because it enables: the selection of optimal sites based on topography and weather-forecast data; the best performance based on technical availability and real-time weather and measurement data related to environmental constraints (noise); and the optimization of electricity sales by combining production data with data on demand, market prices, and storage capacities.
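As a toy illustration of how weather-forecast data can feed a production estimate, consider the following sketch. The linear irradiance-to-power relation, the parameter values and the function name are deliberate simplifications for exposition, not Engie's actual models.

```python
def predict_solar_output(forecast_irradiance_wm2: list[float],
                         capacity_mw: float,
                         availability: float = 0.97) -> list[float]:
    """Estimate hourly plant output (MW) from forecast irradiance (W/m^2).

    Assumes output scales linearly with irradiance up to a reference
    1000 W/m^2, capped at installed capacity and scaled by technical
    availability.
    """
    reference = 1000.0  # standard test-condition irradiance
    return [min(capacity_mw, capacity_mw * irr / reference) * availability
            for irr in forecast_irradiance_wm2]
```

Feeding in a night-time forecast of zero irradiance returns zero output, which is exactly the intermittency problem described above; combining such production forecasts with demand and price data is what enables the sales optimization.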
In terms of return on data investments, renewable energy forms a perfect use case where the value from data can be made explicit. These sources of energy are heavily equipped with sensors, allowing for predictive maintenance and data monetisation, and on the production side there are no GDPR concerns. Operational excellence in the renewable energy business will be the only way for incumbents to survive. Traditional oil companies such as BP and Total are rapidly transforming themselves and will compete fiercely with the current players in the renewable energy market.
ENGIE — ESSEC
Engie and ESSEC Business School have been working on different cases for three years as part of the Strategic Business Analytics Chair sponsored by Accenture. The Chair’s main objective is to train the next generation of leaders to develop new business strategies, leveraging the numerous applications of advanced analytics. Through a hybrid learning method based on innovation, collaboration and entrepreneurship, the Chair acts as the core of an ecosystem combining data and value creation – from purpose and strategy crafting to transformation, encompassing problem solving, data science & artificial intelligence, culture change and skills development.
Engie is an important part of the Strategic Business Analytics Chair’s ecosystem. In 2021, the Chair students will work on two strategic cases on renewable energy. Their fresh and forward-looking vision on the topics generates innovative ideas and valuable solutions. ESSEC students are particularly interested in working with companies like Engie given its strong environmentally-oriented strategic values. Indeed, the students, being concerned about climate change, prefer to work on business cases that ultimately generate societal value rather than purely commercial cases for e-commerce platforms.
In the future, renewable energy will lead to the creation of many new jobs requiring technical, data and analytics skills. The EU reports that the solar photovoltaic industry alone already accounted for 81,000 jobs, expected to rise to 175,000 by 2021 and to 200,000-300,000 by 2030. Digitalisation and renewable energy go hand in hand and will be an important driver for economic growth. The partnership between Engie and ESSEC will help ensure that young talents are trained and acquire the skills needed to complete the transition to a green society in line with the climate change agreements.
More generally, it is the companies that employ people with the right skills, mindset and vision that will make the difference. Data is now available, most analytics tools that create value are standard, and computational resources are hardly a constraint. It is the culture of the company that requires a fundamental change. Those able to attract young "data ready" business graduates will have the competitive edge.
Accenture notes in its Technology Vision 2020 that the "tech-clash" is a new situation where, on the one hand, people are enthusiastic about technology, data and artificial intelligence, but on the other hand, they require algorithms to be understandable and fair, and want to know where their personal data is used. This balance will be extremely important in the post-COVID-19 era, where what matters most will be human experiences.
Since the start of the COVID-19 pandemic, companies have had to accelerate their digital transformation. This implies increased investments, so substantial that they require C-level support. The stakes are high for organizations. From accelerating sales to optimizing operational processes, digital impacts every aspect of the value chain. While the digital revolution drives an inevitable modernization of companies and a hope of value generation, it also poses a major challenge for organizations: data.
Data from transactions, customers, products, etc. floods the daily operations of organizations, constituting a potentially valuable asset, but above all an important governance and management challenge. Organizations must improve their understanding of these data as part of their transformation.
In the very short term, and in uncertain times, data becomes more crucial than ever for identifying companies' performance levers. Optimizing costs, increasing business revenues, and driving process efficiency are all initiatives based on the availability of relevant data. As decision cycles accelerate, many decision-makers can no longer drive their businesses with approximate and often inaccurate data. Having good data, just in time, has become a pressing necessity. But this prospect seems attainable only if the data heritage is better mastered. This is precisely the purpose of the "Data Footprint" method designed by Kearney and ESSEC. Evaluating the data footprint now constitutes an essential approach to secure investments and increase control over data assets.
The Data Footprint approach introduces a virtuous practice aimed at understanding the data heritage and the risks, challenges and limits linked to data within organizations. It is an evaluation process based on a 360° analysis of the data required for a company initiative, steered by the entity in charge of data governance.
The aim of the Data Footprint is to assess the data assets in order to establish a risk assessment score. Based on multiple dimensions of analysis, such as data quality or security, our method allows a quantified assessment of the data heritage in an organization. Today, the data heritage is still poorly controlled and exploited in many companies. What is the quality level of critical data sets in the organization (e.g. customer/supplier data)? What is the associated level of risk? What is the degree of control and ownership of data in the organization? These questions are often asked by decision makers without concrete answers based on a structured assessment. The complexity of information systems, combined with the lack of governance, often makes the data equation complex and costly.
The Data Footprint allows companies to get a tangible data assessment across multiple dimensions in order to establish a risk score. The purpose of such a measure is to be able to accurately assess areas of weakness and to monitor data heritage improvements. The approach also allows internal and external benchmarks based on a standardized analysis grid.
The strategy for implementing a Data Footprint should be progressive while focusing on the critical data sets in the context of companies’ major programs, projects or business transformation initiatives.
The approach should involve several collaborators, at least representatives of business lines and IT, who jointly use a score sheet based on the following five dimensions: accessibility and availability, quality, ownership, risks, and identification of future users. The overall score calculated on these five dimensions can range between 0 and 15; the lower the score, the higher the risk related to the enterprise initiative.
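A minimal Python sketch of such a score sheet might look as follows. The 0-3 per-dimension rating scale is our own assumption, chosen so that five dimensions sum to the stated 0-15 range; the method's actual scoring grid may differ.

```python
# Hypothetical score-sheet sketch; dimension names follow the text above.
DIMENSIONS = ("accessibility_availability", "quality", "ownership",
              "risks", "future_users")

def data_footprint_score(ratings: dict[str, int]) -> int:
    """Sum the five dimension ratings (each assumed 0-3) into a 0-15 score."""
    if set(ratings) != set(DIMENSIONS):
        raise ValueError(f"expected exactly the dimensions {DIMENSIONS}")
    if any(not 0 <= r <= 3 for r in ratings.values()):
        raise ValueError("each dimension is rated on a 0-3 scale")
    return sum(ratings.values())
```

Under this reading, a project whose data sets rate poorly on most dimensions lands well below the midpoint, signalling high risk for the enterprise initiative.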
Consider as an example a company specializing in the distribution of electronic equipment to the general public through its distribution network of more than 2,000 stores. As part of its data strategy, the company decides to launch a priority project that deploys a "Customer-centric" approach in order to increase customer value. The objective is to capture a better understanding of customer preferences in order to meet their expectations. The company anticipates a significant potential risk linked to data (availability, quality, etc.) and decides to launch a Data Footprint approach.
The total data-risk score for this company was less than 5 in the evaluation exercise. On the recommendation of the Chief Data Officer, in agreement with the rest of the team, the launch of the project was postponed pending the implementation of a specific data-related action plan. This approach allowed the company to identify a major data risk on this project. Indeed, launching the project rapidly without prior assessment would likely have led to failure with economic consequences (losses estimated at a few hundred thousand euros). The approach also made it possible to initiate collaborative work around the data over the entire duration of the assessment (one month), thus avoiding internal misunderstandings about the responsibilities of the various stakeholders (business lines, IT teams, etc.). Finally, a clear action plan could be drawn up, justifying the investment of technical and human resources to upgrade the information system.
For a more technical version of this article or further details on the Data Footprint, please contact:
Reda Gomery, Vice President, A.T. Kearney, Reda.Gomery@kearney.com
Jeroen Rombouts, Professor, Essec Business School, firstname.lastname@example.org
Hiring data scientists is not an easy task, and neither is keeping them. The problem lies in a mutual misperception. On the one hand, data scientists receive rigorous training in statistics, machine learning, and coding, and enjoy working in a learning environment tackling specific problems. On the other hand, many companies are looking for data talents who ask the right business questions and can rapidly acquire domain knowledge. Business schools have been developing programs over the last few years to produce "data ready" students and thereby fill an important gap in the data-skills job market.
Business school students following a data track in their curriculum are technically solid enough to collaborate with “pure” data scientists, and have simultaneously built up business acumen through intensive company-based data cases. In fact, before digging into data, they learn to ask the relevant strategic questions and project how their solution will bring value after passing through the data and analytics process. They understand the importance of processes, technology and culture and they master data storytelling to convince sponsors to scale their solutions.
The learning process to master data value creation relies on a strong framework, fueled by on-the-ground experimentation. It is slow and requires many iterations and much trial and error. Data graduates appreciate this and want jobs that will keep offering challenging data valorisation problems, supervised by inspiring managers. Executives are realising that such supervised, hybrid learning environments are key if their companies are to succeed in becoming data-centric. A natural question to ask, then, is "how do young data graduates see their dream work environment?".
We are the first to have conducted a detailed survey to highlight the main aspirations of young data graduates. The results allow us to understand what fundamentally attracts them, how they see their future career and what matters to them on a daily basis. This also allows us to provide some key insights for executives and HR on how to better attract and retain data talents.
It turns out that data graduates are looking for a job where they can tackle interesting data challenges, complete a variety of tasks and, above all, switch between different projects so they can upgrade their skills. Variety and transversality are seen as fertile ground, rather than focusing too long on a specific topic. Unsurprisingly, data graduates consider the consulting environment the most attractive; they are less focused on the business sector where they could work, and consider large companies as interesting as the tech giants (such as Amazon and Apple), as long as they offer a wide range of projects. Remuneration is not a key argument for young graduates. They are aware of the scarcity of their skills on the market and the associated premium, and they know their remuneration will follow an ascending curve if they develop the right set of skills during their first jobs. More surprisingly, the values or mission of their future employer have relatively little impact on attractiveness.
In a nutshell, young data graduates seek above all to develop their expertise through various projects, in an agile and friendly working environment, supported by managers who have a strong tech skill set.
These insights provide an incentive for many companies to review their talent strategies, which often seek to attract students through communication about corporate challenges or HR benefits. On the contrary, the priority to attract and keep data graduates is to develop a culture of continuous learning, allowing a constant development of their skills, and to organize a 2 to 3 year career path for them that allows them to work on a chain of different projects, gradually increasing their accountability. This can be achieved by building and operating the right ecosystem, going beyond the frontiers of the company and beyond internal silos.
The role of the managers also turns out to be essential for attracting and keeping data graduates. Engaging them early on in the recruitment process allows them to go beyond employer-branding communication and attract students with solid arguments about the reality of their future job. Developing their coaching and collaboration skills and their agility in transforming data into value-driven initiatives will allow them to become the role models they need to be to grow and nurture their teams.
Building those two pillars at the right pace will prove the best way to match the expectations of young data graduates in the long run.
A few key figures
63% of young graduates consider “learning” as the most important professional value
47% consider having varied and interesting tasks is the first daily priority for their future job
41% consider “career development opportunities” as their top expectation for HR
83% of young graduates will not pay attention to the company’s mission for their future job
Fabrice Marque, Executive Director, Essec Business School
Jeroen Rombouts, Professor, Essec Business School
Arnaud Gilberton, CEO, Idoko
Timothy Lê, General Manager, Idoko
We would like to thank Joris Fayard and Kai-Lin Yang for their support in designing the survey and analyzing the data.
Further information can be found on:
ESSEC-CENTRALE Master in Data Science and Business Analytics