Looking glass
Lens four: Morphing of the computing fabric
The boundaries of computing are expanding, pushing the edges of what’s possible for enterprises. The emerging computing environment not only provides the opportunity to tap into unprecedented data analysis and processing power, but also to structure computing architecture to better serve the needs of the business.
Scanning the signals
The computing landscape is changing to accommodate the future of the internet and all its users. No longer just centralized in cloud services, processing now occurs on the edge, in devices, across multiple clouds and in managed services. The future is potentially even more exciting, with the rise of quantum and biological computing, and even DNA-based storage.
In the past, large-scale data processing was needed only by big enterprises. Since the advent of smartphones and the proliferation of IoT devices, we’ve seen a massive increase in the amount of data produced. Analysis of data is no longer the domain of corporate data warehouses; data can be anywhere in the vast interconnected web of people, devices, cars, factories, and cities. With more data comes the requirement for more computing power.
Alongside changes in the location of data and computing, there’s a continuing evolution of computer architecture. The push to mobile has driven high efficiency chips and even designs that include “big/little” computing cores optimized for high performance and efficiency depending on workload. Signals of this shift include:
- The proliferation of devices capable of computing, like wearables, autonomous/smart cars or in-home “hubs”
- Application-specific integrated circuits (ASICs) such as Google’s Tensor Processing Unit (TPU), which is designed specifically for neural network machine learning, becoming widely available
- Processor advancements for mobile devices, for example low-power chips such as Apple’s M1
- Development of practical applications for quantum computers. Examples are likely to include cryptography, medical research, and certain complex optimization problems such as those found in finance and supply chain management
The opportunity
Making informed computing choices enables businesses to optimize IT costs as well as provide more responsive services to consumers. In the enterprise context, not all deployment options are equal.
Despite the easy availability of cloud computing, where your data actually lives and how you process it matters. Innovative network technologies can’t overcome fundamental physics; a data center halfway round the world will always have worse latency than one local to a region or even distributed to a home or workplace.
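The physics argument can be made concrete with a back-of-the-envelope calculation. The sketch below is illustrative, not a network model: it computes only the lower bound that signal speed in optical fiber (roughly 200,000 km/s) places on round-trip time, before any routing, queuing, or processing delay is added.

```python
# Rough lower bound on round-trip latency imposed by physics alone.
# The fiber speed figure is an approximation (~2/3 the speed of light
# in a vacuum); real-world latency is always higher than this floor.

FIBER_SPEED_KM_PER_S = 200_000  # approximate signal speed in optical fiber

def min_rtt_ms(distance_km: float) -> float:
    """Best-case round-trip time in milliseconds, ignoring routing,
    queuing, and processing delays."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_S * 1000

# A data center ~100 km away vs. one halfway around the world (~20,000 km):
print(f"local:  {min_rtt_ms(100):.1f} ms floor")   # 1.0 ms
print(f"remote: {min_rtt_ms(20_000):.0f} ms floor") # 200 ms
```

No networking innovation can take that remote figure below its floor, which is why latency-sensitive workloads push computing toward the region, or the premises, where the data is used.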
This means there can be significant cost and customer experience implications depending on where you choose to locate your data, how you move it around and how you compute with it. Selecting the most appropriate hardware, including chip type, size, and memory, will have a direct impact on the number of instances or virtual machines you need. Some use cases — healthcare, financial services, telecommunications, and industrial IoT — require lower latency than can be obtained with a centralized platform, and therefore more local computing resources.
Regardless of how resources are structured, it’s important to remember they will be seen by end-customers as your responsibility. Consumers expect their connected devices to work, and if they can’t ring their doorbell or unlock their connected car due to a cloud provider’s downtime, they’ll blame the doorbell or car vendor — not the company providing the underlying computing.
What we’ve seen
Trends to watch: Top three
Adopt
Edge computing. Autonomous vehicles, medical monitoring, smart homes and cities, and augmented reality all rely on powerful cloud-based computing and data storage, but need low latency to be safe and effective. Edge computing brings data storage and processing closer to devices rather than relying on a central location that may be thousands of miles away. Plan for more diverse and complex deployment scenarios, and carefully consider the management, monitoring, and testing challenges that come with complex, remote architectures.
Analyze
Digital twins. A digital twin is a virtual model of a process, product or service that allows both simulation and data analysis. 3D visualization can be used together with live data so you can understand what’s happening to pieces of equipment you can’t actually see. For example, GE’s jet engines contain around two dozen physical sensors, but their digital twins compute several hundred virtual sensors, improving maintenance, safety and efficiency. If this concept fits your use case, the efficiency gains can be enormous.
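The virtual-sensor idea can be sketched in a few lines. Everything below is a hypothetical illustration — the class, field names, formulas, and threshold are invented for this example and are not GE’s model — but it shows the pattern: a model of the equipment derives quantities no physical sensor measures directly.

```python
# Minimal digital-twin sketch: physical readings in, virtual sensors out.
# All names, formulas, and thresholds here are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class EngineReadings:
    # Values from physical sensors actually fitted to the machine
    inlet_temp_c: float
    outlet_temp_c: float
    fuel_flow_kg_s: float

class EngineTwin:
    """Derives 'virtual sensor' values from physical readings using a
    (here, trivially simple) model of the equipment."""

    def virtual_sensors(self, r: EngineReadings) -> dict:
        temp_rise = r.outlet_temp_c - r.inlet_temp_c
        return {
            # Quantities no physical sensor reports directly
            "temp_rise_c": temp_rise,
            "thermal_load": temp_rise * r.fuel_flow_kg_s,
            "needs_inspection": temp_rise > 600,  # illustrative threshold
        }

twin = EngineTwin()
print(twin.virtual_sensors(EngineReadings(15.0, 640.0, 1.2)))
```

In a production twin the model would be far richer — physics simulations or learned models fed by live telemetry — but the shape is the same: a handful of physical inputs fanning out into many derived, actionable signals.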
Anticipate
Neuromorphic chips. Neuromorphic chips are made up of artificial neurons and synapses that replicate the way the brain works, handling processing entirely in the chip. They use significantly less energy because, like the human brain, they don’t require the processor to be idle as data moves to and from memory. They also exploit parallelism to a much greater extent than even GPUs and other specialized systems. This computing strategy could result in both faster processing and significant energy savings.
Trends to watch: The complete matrix
- Human-machine collaboration
- AI as a service
- Edge computing
- Polycloud
- Smart systems and ecosystems
- Data platforms & real-time analytics
- Industrial IoT platforms
- Smart contracts
- Digital twin
- Online machine learning
- Wearables
- Blockchain technologies
- Ubiquitous connectivity
- P2P technologies
- Cloud portability
- Fog computing
- Modern AuthZ
- Digital ecosystems
- Intelligent M2M collaboration
- Ambient computing
- Smart cities
- Autonomous vehicles
- Satellite networks
- Autonomous drones / drone as a platform
- Production immune systems
- Quantum computing
- Precision “X”
- 5G
- Data locality
- Nanotechnology
- Neuromorphic chips
- DNA data storage
- End of Moore’s Law
- Private IoT PaaS platform
Advice for adopters
- Evaluate the full range of hardware options for the deployment of your software, and be open to using a non-obvious choice. While cloud platforms make it easy to provision servers, the hardware configuration of those servers can and should be tuned to the applications running on them.
- Invest in software architecture patterns that allow components to be independently deployable, even if you won’t be deploying them in separate clusters or data centers initially. This means including decentralized authentication, authorization, and data. Doing so will allow you to move services to edge computing as needed to support your system’s evolution.
- When using distributed computing, carefully measure your network costs to identify services that could benefit from being moved closer to their users. Be sure to include the increased cost of maintenance in this calculation.
- Invest in improving your distributed systems capabilities. Most organizations default to centralized or monolithic applications, and often lack the skills needed to build modern distributed systems.
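The network-cost measurement suggested above can start as a simple spreadsheet-style calculation. The sketch below is a hedged illustration: the egress rate, service names, and traffic figures are all invented, and a real analysis would pull measured numbers from billing and monitoring data.

```python
# Illustrative sketch: given per-service cross-region traffic, estimate
# monthly egress cost to spot candidates for relocation closer to users.
# The rate and all figures are hypothetical, not any provider's pricing.

EGRESS_RATE_PER_GB = 0.09  # assumed cross-region rate, USD/GB

monthly_cross_region_gb = {  # hypothetical measurements per service
    "video-transcoder": 4200,
    "user-profile-api": 35,
    "telemetry-ingest": 1800,
}

def relocation_candidates(traffic: dict, threshold_usd: float = 100.0) -> list:
    """Services whose estimated cross-region egress bill exceeds the
    threshold, heaviest traffic first. Remember to weigh any savings
    against the increased maintenance cost of a distributed deployment."""
    return sorted(
        (svc for svc, gb in traffic.items()
         if gb * EGRESS_RATE_PER_GB > threshold_usd),
        key=lambda svc: -traffic[svc],
    )

print(relocation_candidates(monthly_cross_region_gb))
# → ['video-transcoder', 'telemetry-ingest']
```

Even a crude pass like this makes the trade-off discussable: at the assumed rate, the two flagged services cost hundreds of dollars a month in egress alone, while the low-traffic API is not worth the operational overhead of moving.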
By 2022, businesses will…
… realize computing is no longer confined to particular machines or locations, nor subject to the old constraints of centralization. With more choice comes the ability to set up systems and devices so they contribute directly to the responsiveness of the organization, and to bring services closer to customers so they can be delivered at speed.