
Gartner: “The Future of Business Is Composable”

“Composable Enterprise means creating an organization made from interchangeable building blocks” (Gartner)

The idea of composable business operates on four basic principles:

  • More speed through discovery
  • Greater agility through modularity
  • Better leadership through orchestration
  • Resilience through autonomy

Accenture has worked on such composable architectures for SAP clients for years and has the leading offering in the market, one that has already been applied at many large SAP clients.

Composable technologies are the tools for today and the key for tomorrow:

  • S/4HANA is the new digital core, providing a stable and proven foundation for the enterprise.
  • The core is extended by cloud platforms such as SAP BTP and, for example, Google Cloud. These platforms provide all the pieces and connect the capabilities in a very flexible way.
  • Google Cloud, for example, provides truly differentiating capabilities in Security, Automation, Cloud Coupling (Google Cortex) and AI/ML for industry-specific innovation.
  • The OVGU, together with Google Cloud and Accenture, offers applied research on a Composable Enterprise architecture, including SAP cloud coupling, as well as training.

Whitepaper – 3 Steps for Digital Disruption based on HANA/S4HANA + New Intel Persistent Memory 3DXP

Download the whitepaper (PDF): https://imta-ovgu.de/wp-content/uploads/2017/06/HANA_on_Intel_Three-steps-to-reinvent-your-enterprise-as-a-digital-disrupter_By-Prof.-Dr.-Alexander-Zeier-Accenture-and-Edward-Goldman-with-3DXP-Intel-CTO-2.pdf

Press Release by Accenture – New Exciting Times…

Dr. Alexander Zeier Joins Accenture as Global Lead for In-Memory Solutions

Will Also Serve as Director of Programs for SAP HANA® Within the Accenture Innovation Center for SAP® Solutions

MADRID; November 13, 2012 – Accenture (NYSE: ACN) today announced the appointment of Dr. Alexander Zeier as managing director of In-Memory Solutions. In this capacity, Zeier will work with Accenture clients to develop in-memory solutions, provide sales support to global industry teams in leveraging in-memory technology, and provide ongoing thought leadership. He will also serve as director of programs for the SAP HANA® platform within the Accenture Innovation Center for SAP® solutions.

Dr. Zeier has been working with SAP technologies and solutions for over 20 years. Prior to joining Accenture, he was responsible for SAP’s first large in-memory application, and was a pivotal part of the development of SAP HANA, SAP’s platform for real-time analytics and applications. He holds ten patents related to in-memory technology and applications for enterprise systems, and is co-author with Hasso Plattner of the recently published book, “In-Memory Data Management – Technology and Applications.”

“Alexander brings an incredible and unrivaled depth of expertise in the areas of analytics and in-memory technology,” said Christophe Mouille, global managing director of SAP business for Accenture. “Alexander possesses a truly unique understanding of the business value that can be unlocked through the power of in-memory computing. Our clients will benefit from his extensive experience in researching and developing in-memory technology that turns massive amounts of customer data into actionable insights.”

“Accenture has been committed to developing solutions based on in-memory technology for several years, and I am excited to be joining this team to drive further innovations in this important strategic area,” said Zeier. “Companies are relying on transactional and analytical data more than ever. The capabilities enabled by in-memory computing combined with Accenture’s vast industry knowledge will result in better, faster insights and new innovative business processes for our clients.”

Since March 2012, Dr. Zeier has been a Visiting Professor in Residence at the Massachusetts Institute of Technology (MIT), lecturing and conducting research around innovative enterprise applications and business process optimizations that leverage in-memory technology or SAP HANA. He was also deputy chair, Enterprise Platform and Integration Concepts, at the Hasso Plattner Institute in Germany, focusing on real-time, in-memory enterprise systems and RFID technology.

Dr. Zeier received an MBA from the University of Würzburg. He completed his studies in information technology at the Chemnitz University of Technology and gained his Ph.D. in Supply Chain Management Systems at the University of Erlangen-Nuremberg.

About Accenture
Accenture is a global management consulting, technology services and outsourcing company, with more than 257,000 people serving clients in more than 120 countries. Combining unparalleled experience, comprehensive capabilities across all industries and business functions, and extensive research on the world’s most successful companies, Accenture collaborates with clients to help them become high-performance businesses and governments. The company generated net revenues of US$27.9 billion for the fiscal year ended Aug. 31, 2012. Its home page is www.accenture.com.


German Edition of “In-Memory Data Management – An Inflection Point for Enterprise Applications” Coming in Early June

The first-ever German edition will be available in June from Gabler/Springer:

Based on demand from German companies for a German version for their employees, a translation of the first edition of the book was initiated. A translation is always a time-consuming process, and the time was needed to achieve high quality.

The German version will first be published at the In-Memory Conference (http://www.in-memory.cc/) in early June.


Why In-Memory now?


Imagine you live in a major US city. Now, imagine that every time you want a glass of water, instead of getting it from the kitchen, you need to drive to the airport, get on a plane and fly to Germany and pick up your water there.

From the perspective of a modern CPU, accessing data which is in-memory is like getting water from the kitchen. Accessing a piece of data from the computer’s hard disk is like flying to Germany for your glass of water. In the past, the prohibitive cost of main memory made the flight to Germany necessary. The last few years, however, have seen a dramatic reduction in the cost per megabyte of main memory, finally making the glass of water in the kitchen a cost-effective and much more convenient option.
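To put rough numbers on the analogy, here is a back-of-the-envelope comparison in Python (assuming typical order-of-magnitude latencies of roughly 100 nanoseconds for a main-memory access and roughly 10 milliseconds for a random disk access; exact values vary by hardware generation):

    # Back-of-the-envelope latency comparison (assumed typical values,
    # not measurements; actual hardware varies).
    dram_access_s = 100e-9  # ~100 ns per main-memory access
    disk_seek_s = 10e-3     # ~10 ms per random disk access

    ratio = disk_seek_s / dram_access_s
    print(f"Disk is roughly {ratio:,.0f}x slower than DRAM")  # ~100,000x

    # In terms of the analogy: if a memory access were a one-minute walk
    # to the kitchen, the equivalent disk access would take about 69 days.
    print(f"'Flight' duration: {ratio / (60 * 24):.0f} days")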

This orders-of-magnitude difference in access times has profound implications for all enterprise applications. Things that in the past were not even considered because they took so long now become possible, allowing businesses concrete insight into the workings of their company that previously was the subject of speculation and guesswork.
The in-memory revolution is not simply about putting data into memory and thus being able to work with it “faster”. We also show how two other major trends in the IT industry have to be harnessed to truly exploit the potential performance benefits:

a) the advent of multi-core CPUs and the necessity of exploiting this parallelism in software, and
b) the stalling access latency of DRAM, which requires software to cleverly balance CPU and memory activity.

Another key aspect of in-memory data management is a change in the way data is stored in the underlying database. This is of particular relevance for the enterprise applications that are our focus. The power of in-memory data management lies in connecting all these dots.

The Revolutionary Power of an In-Memory Column-Oriented Database
Our experience has shown us that many enterprise applications work with databases in a similar way. They process large numbers of rows during their execution, but crucially, only a small number of columns in a table might be of interest in a particular query. The columnar storage model allows only the required columns to be read while the rest of the table can be ignored. This is in contrast to the more traditional row-oriented model, where all columns of a table—even those that are not necessary for the result—must be accessed.
The columnar storage model also means that the elements of a given column are stored together. This makes the common enterprise operation of aggregation much faster than in a row-oriented model where the data from a given column is stored in amongst the other data in the row.
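A minimal sketch of the difference in Python (illustrative only; no real storage engine works on plain lists like this) shows the same table held row-wise and column-wise, queried for a single-column aggregate:

    # Illustrative row vs. column layout (not a real storage engine).
    rows = [
        {"id": 1, "country": "DE", "status": "open",   "amount": 120.0},
        {"id": 2, "country": "US", "status": "closed", "amount": 75.5},
        {"id": 3, "country": "DE", "status": "closed", "amount": 230.0},
    ]
    columns = {
        "id":      [1, 2, 3],
        "country": ["DE", "US", "DE"],
        "status":  ["open", "closed", "closed"],
        "amount":  [120.0, 75.5, 230.0],
    }

    # Row store: every row is touched in full, even though the query
    # only needs the 'amount' attribute.
    total_rows = sum(row["amount"] for row in rows)

    # Column store: only the contiguous 'amount' column is read; the
    # other columns are never accessed.
    total_cols = sum(columns["amount"])

    assert total_rows == total_cols == 425.5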

Scaling through Parallelization Across Multiple Cores and Machines
Single CPU cores are no longer getting any faster, but the number of CPU cores is still expected to double every 18 months. This makes exploiting the parallel processing capabilities of multi-core CPUs of central importance to all future software development. As we saw above, in-memory columnar storage places all the data from a given column together in memory, making it easy to assign one or more cores to process a single column. This is called vertical fragmentation.
Tables can also be split into sets of rows and distributed to different processors, in a process called horizontal fragmentation. This is particularly important as data volumes continue to grow, and the approach has been used with some success to achieve parallelism in data warehousing applications. Both these methods can be applied not only across multiple cores in a single machine, but also across multiple machines in a cluster or in a data center, as the sketch below illustrates.
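A hedged sketch of horizontal fragmentation in Python, with a process pool standing in for the cores or machines a real system would use (the fragment count and column contents are arbitrary choices for illustration):

    # Horizontal fragmentation: split a column into row ranges
    # ("fragments") and aggregate each one on its own worker.
    from concurrent.futures import ProcessPoolExecutor

    def partial_sum(fragment):
        # Each worker aggregates its fragment independently.
        return sum(fragment)

    def parallel_sum(column, num_fragments=4):
        size = (len(column) + num_fragments - 1) // num_fragments
        fragments = [column[i:i + size] for i in range(0, len(column), size)]
        with ProcessPoolExecutor(max_workers=num_fragments) as pool:
            # Merge step: combine the per-fragment partial results.
            return sum(pool.map(partial_sum, fragments))

    if __name__ == "__main__":
        amounts = [float(i) for i in range(1_000_000)]
        assert parallel_sum(amounts) == sum(amounts)

Vertical fragmentation would instead hand whole columns to different workers; in a distributed setting the fragments live on different machines and only the small partial results travel over the network.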

Using Compression for Performance and to Save Space in Main Memory
Data compression techniques exploit redundancy within data and knowledge about the data domain. Compression applies particularly well to columnar storage in an enterprise data management scenario, since all data within a column has the same data type and, in many cases, there are few distinct values, for example in a country column or a status column.

In column stores, compression is used for two reasons: to save space and to increase performance. Efficient use of space is of particular importance to in-memory data management because, even though the cost of main memory has dropped considerably, it is still relatively expensive compared to disk. Due to the compression within the columns, the density of information in relation to the space consumed is increased. As a result, more relevant information can be loaded for processing at a time, thereby increasing performance. Fewer load actions are necessary in comparison to row storage, where even columns of no relevance to the query are loaded without being used.
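Dictionary encoding is one common compression scheme for such low-cardinality columns. The following minimal sketch (illustrative only, not SAP HANA's actual implementation) stores each distinct value once and replaces the column with small integer codes:

    # Dictionary encoding for a low-cardinality column: distinct values
    # are stored once; the column itself becomes integer codes.
    def dictionary_encode(column):
        dictionary = sorted(set(column))  # e.g. ["DE", "FR", "US"]
        code_of = {value: i for i, value in enumerate(dictionary)}
        return dictionary, [code_of[v] for v in column]

    def dictionary_decode(dictionary, codes):
        return [dictionary[c] for c in codes]

    country = ["DE", "DE", "US", "FR", "DE", "US"] * 1000
    dictionary, codes = dictionary_encode(country)
    assert dictionary_decode(dictionary, codes) == country

    # With only 3 distinct values each code fits in 2 bits, and scans or
    # aggregations can operate directly on the densely packed codes.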

Summary: Why In-Memory now? 
In-memory data management is not only a technology but a different way of thinking about software development: we must take fundamental hardware factors into account, such as access times to main memory versus disk and the potential parallelism that can be achieved with multi-core CPUs. Taking this new world of hardware into account, we must write software that explicitly makes the best possible use of it. On the positive side for developers of enterprise applications, this lays the technological foundations for a database layer tailored specifically to all these issues. On the negative side, however, the database will not take care of all the issues on its own. Developers must understand the underlying layers of software and hardware to best take advantage of the potential for performance gains.