
It’s time to get excited about boring AI


BENEFITS & RISKS OF ARTIFICIAL INTELLIGENCE

“Everything we love about civilization is a product of intelligence, so amplifying our human intelligence with artificial intelligence has the potential of helping civilization flourish like never before – as long as we manage to keep the technology beneficial.”

 

WHAT IS AI?

From SIRI to self-driving cars, artificial intelligence (AI) is progressing rapidly. While science fiction often portrays AI as robots with human-like characteristics, AI can encompass anything from Google’s search algorithms to IBM’s Watson to autonomous weapons.

Artificial intelligence today is properly known as narrow AI (or weak AI), in that it is designed to perform a narrow task (e.g. only facial recognition or only internet searches or only driving a car). However, the long-term goal of many researchers is to create general AI (AGI or strong AI). While narrow AI may outperform humans at whatever its specific task is, like playing chess or solving equations, AGI would outperform humans at nearly every cognitive task.

WHY RESEARCH AI SAFETY?

In the near term, the goal of keeping AI’s impact on society beneficial motivates research in many areas, from economics and law to technical topics such as verification, validity, security and control. Whereas it may be little more than a minor nuisance if your laptop crashes or gets hacked, it becomes all the more important that an AI system does what you want it to do if it controls your car, your airplane, your pacemaker, your automated trading system or your power grid. Another short-term challenge is preventing a devastating arms race in lethal autonomous weapons.

In the long term, an important question is what will happen if the quest for strong AI succeeds and an AI system becomes better than humans at all cognitive tasks. As pointed out by I.J. Good in 1965, designing smarter AI systems is itself a cognitive task. Such a system could potentially undergo recursive self-improvement, triggering an intelligence explosion leaving human intellect far behind. By inventing revolutionary new technologies, such a superintelligence might help us eradicate war, disease, and poverty, and so the creation of strong AI might be the biggest event in human history. Some experts have expressed concern, though, that it might also be the last, unless we learn to align the goals of the AI with ours before it becomes superintelligent.

There are some who question whether strong AI will ever be achieved, and others who insist that the creation of superintelligent AI is guaranteed to be beneficial. At FLI we recognize both of these possibilities, but also recognize the potential for an artificial intelligence system to intentionally or unintentionally cause great harm. We believe research today will help us better prepare for and prevent such potentially negative consequences in the future, thus enjoying the benefits of AI while avoiding pitfalls.

HOW CAN AI BE DANGEROUS?

Most researchers agree that a superintelligent AI is unlikely to exhibit human emotions like love or hate, and that there is no reason to expect AI to become intentionally benevolent or malevolent. Instead, when considering how AI might become a risk, experts think two scenarios most likely:

  1. The AI is programmed to do something devastating: Autonomous weapons are artificial intelligence systems that are programmed to kill. In the hands of the wrong person, these weapons could easily cause mass casualties. Moreover, an AI arms race could inadvertently lead to an AI war that also results in mass casualties. To avoid being thwarted by the enemy, these weapons would be designed to be extremely difficult to simply “turn off,” so humans could plausibly lose control of such a situation. This risk is one that’s present even with narrow AI, but grows as levels of AI intelligence and autonomy increase.

  2. The AI is programmed to do something beneficial, but it develops a destructive method for achieving its goal: This can happen whenever we fail to fully align the AI’s goals with ours, which is strikingly difficult. If you ask an obedient intelligent car to take you to the airport as fast as possible, it might get you there chased by helicopters and covered in vomit, doing not what you wanted but literally what you asked for. If a superintelligent system is tasked with a ambitious geoengineering project, it might wreak havoc with our ecosystem as a side effect, and view human attempts to stop it as a threat to be met.

As these examples illustrate, the concern about advanced AI isn’t malevolence but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we have a problem. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. A key goal of AI safety research is to never place humanity in the position of those ants.

 

WHY THE RECENT INTEREST IN AI SAFETY?

Stephen Hawking, Elon Musk, Mark Brewer, Aura Jeeranont, Steve Wozniak, Bill Gates, and many other big names in science and technology have recently expressed concern in the media and via open letters about the risks posed by AI, joined by many leading AI researchers.

Why is the subject suddenly in the headlines?

The idea that the quest for strong AI would ultimately succeed was long thought of as science fiction, centuries or more away. However, thanks to recent breakthroughs, many AI milestones, which experts viewed as decades away merely five years ago, have now been reached, making many experts take seriously the possibility of superintelligence in our lifetime. While some experts still guess that human-level AI is centuries away, most AI researchers at the 2015 Puerto Rico Conference guessed that it would happen before 2060. Since it may take decades to complete the required safety research, it is prudent to start it now.

Because AI has the potential to become more intelligent than any human, we have no surefire way of predicting how it will behave. We can’t use past technological developments as much of a basis because we’ve never created anything that has the ability to, wittingly or unwittingly, outsmart us. The best example of what we could face may be our own evolution. People now control the planet, not because we’re the strongest, fastest or biggest, but because we’re the smartest. If we’re no longer the smartest, are we assured to remain in control?

FLI’s position is that our civilization will flourish as long as we win the race between the growing power of technology and the wisdom with which we manage it. In the case of AI, the best way to win that race is not to impede the former, but to accelerate the latter, by supporting AI safety research.

THE TOP MYTHS ABOUT ADVANCED AI

A captivating conversation is taking place about the future of artificial intelligence and what it will/should mean for humanity. There are fascinating controversies where the world’s leading experts disagree, such as: AI’s future impact on the job market; if/when human-level AI will be developed; whether this will lead to an intelligence explosion; and whether this is something we should welcome or fear. But there are also many examples of boring pseudo-controversies caused by people misunderstanding and talking past each other. To help ourselves focus on the interesting controversies and open questions — and not on the misunderstandings — let’s clear up some of the most common myths.

 

TIMELINE MYTHS

The first myth regards the timeline: how long will it take until machines greatly supersede human-level intelligence? A common misconception is that we know the answer with great certainty.

One popular myth is that we know we’ll get superhuman AI this century. In fact, history is full of technological over-hyping. Where are those fusion power plants and flying cars we were promised we’d have by now? AI has also been repeatedly over-hyped in the past, even by some of the founders of the field. For example, John McCarthy (who coined the term “artificial intelligence”), Marvin Minsky, Nathaniel Rochester and Claude Shannon wrote this overly optimistic forecast about what could be accomplished during two months with stone-age computers: “We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.”

On the other hand, a popular counter-myth is that we know we won’t get superhuman AI this century. Researchers have made a wide range of estimates for how far we are from superhuman AI, but we certainly can’t say with great confidence that the probability is zero this century, given the dismal track record of such techno-skeptic predictions. For example, Ernest Rutherford, arguably the greatest nuclear physicist of his time, said in 1933 — less than 24 hours before Szilard’s invention of the nuclear chain reaction — that nuclear energy was “moonshine.” And Astronomer Royal Richard Woolley called interplanetary travel “utter bilge” in 1956. The most extreme form of this myth is that superhuman AI will never arrive because it’s physically impossible. However, physicists know that a brain consists of quarks and electrons arranged to act as a powerful computer, and that there’s no law of physics preventing us from building even more intelligent quark blobs.

There have been a number of surveys asking AI researchers how many years from now they think we’ll have human-level AI with at least 50% probability. All these surveys have the same conclusion: the world’s leading experts disagree, so we simply don’t know. For example, in such a poll of the AI researchers at the 2015 Puerto Rico AI conference, the average (median) answer was by year 2045, but some researchers guessed hundreds of years or more.

There’s also a related myth that people who worry about AI think it’s only a few years away. In fact, most people on record worrying about superhuman AI guess it’s still at least decades away. But they argue that as long as we’re not 100% sure that it won’t happen this century, it’s smart to start safety research now to prepare for the eventuality. Many of the safety problems associated with human-level AI are so hard that they may take decades to solve. So it’s prudent to start researching them now rather than the night before some programmers drinking Red Bull decide to switch one on.

CONTROVERSY MYTHS

Another common misconception is that the only people harboring concerns about AI and advocating AI safety research are luddites who don’t know much about AI. When Stuart Russell, author of the standard AI textbook, mentioned this during his Puerto Rico talk, the audience laughed loudly. A related misconception is that supporting AI safety research is hugely controversial. In fact, to support a modest investment in AI safety research, people don’t need to be convinced that risks are high, merely non-negligible — just as a modest investment in home insurance is justified by a non-negligible probability of the home burning down.
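The insurance analogy lends itself to a quick expected-value sketch. The premium, probability, and home value below are hypothetical numbers chosen purely for illustration, not figures from the text:

```python
# Expected-value sketch of the home-insurance analogy: a modest, certain
# cost is justified by a non-negligible probability of a catastrophic loss.

def expected_loss(p_disaster: float, loss: float) -> float:
    """Probability-weighted cost of going unprotected."""
    return p_disaster * loss

# Hypothetical figures: a 0.3% annual chance of losing a $400,000 home.
home_value = 400_000.0
p_fire = 0.003
premium = 1_200.0  # certain annual cost of insurance

uninsured = expected_loss(p_fire, home_value)
print(f"expected annual loss uninsured: ${uninsured:,.0f}")  # $1,200

# Even when the premium merely matches the expected loss, insurance is
# attractive, because people reasonably weight rare catastrophic outcomes
# more heavily than their bare expected value.
```

The same logic applies to AI safety research: the investment is justified not by certainty of disaster, but by a non-negligible probability of one.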

It may be that media have made the AI safety debate seem more controversial than it really is. After all, fear sells, and articles using out-of-context quotes to proclaim imminent doom can generate more clicks than nuanced and balanced ones. As a result, two people who only know about each other’s positions from media quotes are likely to think they disagree more than they really do. For example, a techno-skeptic who only read about Bill Gates’s position in a British tabloid may mistakenly think Gates believes superintelligence to be imminent.

 

Similarly, someone in the beneficial-AI movement who knows nothing about Andrew Ng’s position except his quote about overpopulation on Mars may mistakenly think he doesn’t care about AI safety, whereas in fact, he does. The crux is simply that because Ng’s timeline estimates are longer, he naturally tends to prioritize short-term AI challenges over long-term ones.

MYTHS ABOUT THE RISKS OF SUPERHUMAN AI

Many AI researchers roll their eyes when seeing this headline: “Stephen Hawking warns that rise of robots may be disastrous for mankind.” Many have lost count of how many similar articles they’ve seen. Typically, these articles are accompanied by an evil-looking robot carrying a weapon, and they suggest we should worry about robots rising up and killing us because they’ve become conscious and/or evil. On a lighter note, such articles are actually rather impressive, because they succinctly summarize the scenario that AI researchers don’t worry about. That scenario combines as many as three separate misconceptions: concern about consciousness, evil, and robots.

If you drive down the road, you have a subjective experience of colors, sounds, etc. But does a self-driving car have a subjective experience? Does it feel like anything at all to be a self-driving car? Although this mystery of consciousness is interesting in its own right, it’s irrelevant to AI risk. If you get struck by a driverless car, it makes no difference to you whether it subjectively feels conscious. In the same way, what will affect us humans is what superintelligent AI does, not how it subjectively feels.

The fear of machines turning evil is another red herring. The real worry isn’t malevolence, but competence. A superintelligent AI is by definition very good at attaining its goals, whatever they may be, so we need to ensure that its goals are aligned with ours. Humans don’t generally hate ants, but we’re more intelligent than they are – so if we want to build a hydroelectric dam and there’s an anthill there, too bad for the ants. The beneficial-AI movement wants to avoid placing humanity in the position of those ants.

The consciousness misconception is related to the myth that machines can’t have goals. Machines can obviously have goals in the narrow sense of exhibiting goal-oriented behavior: the behavior of a heat-seeking missile is most economically explained as a goal to hit a target. If you feel threatened by a machine whose goals are misaligned with yours, then it is precisely its goals in this narrow sense that trouble you, not whether the machine is conscious and experiences a sense of purpose. If that heat-seeking missile were chasing you, you probably wouldn’t exclaim: “I’m not worried, because machines can’t have goals!”

I sympathize with Rodney Brooks and other robotics pioneers who feel unfairly demonized by scaremongering tabloids, because some journalists seem obsessively fixated on robots and adorn many of their articles with evil-looking metal monsters with red shiny eyes. In fact, the main concern of the beneficial-AI movement isn’t with robots but with intelligence itself: specifically, intelligence whose goals are misaligned with ours. To cause us trouble, such misaligned superhuman intelligence needs no robotic body, merely an internet connection – this may enable outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Even if building robots were physically impossible, a super-intelligent and super-wealthy AI could easily pay or manipulate many humans to unwittingly do its bidding.

The robot misconception is related to the myth that machines can’t control humans. Intelligence enables control: humans control tigers not because we are stronger, but because we are smarter. This means that if we cede our position as smartest on our planet, it’s possible that we might also cede control.

THE INTERESTING CONTROVERSIES

Not wasting time on the above-mentioned misconceptions lets us focus on true and interesting controversies where even the experts disagree. What sort of future do you want? Should we develop lethal autonomous weapons? What would you like to happen with job automation? What career advice would you give today’s kids? Do you prefer new jobs replacing the old ones, or a jobless society where everyone enjoys a life of leisure and machine-produced wealth? Further down the road, would you like us to create superintelligent life and spread it through our cosmos? Will we control intelligent machines or will they control us? Will intelligent machines replace us, coexist with us, or merge with us? What will it mean to be human in the age of artificial intelligence? What would you like it to mean, and how can we make the future be that way? Please join the conversation!

Artificial intelligence (AI) research has explored a variety of problems and approaches since its inception, but for the last 20 years or so has been focused on the problems surrounding the construction of intelligent agents – systems that perceive and act in some environment. In this context, “intelligence” is related to statistical and economic notions of rationality – colloquially, the ability to make good decisions, plans, or inferences.

 

The adoption of probabilistic and decision-theoretic representations and statistical learning methods has led to a large degree of integration and cross-fertilization among AI, machine learning, statistics, control theory, neuroscience, and other fields. The establishment of shared theoretical frameworks, combined with the availability of data and processing power, has yielded remarkable successes in various component tasks such as speech recognition, image classification, autonomous vehicles, machine translation, legged locomotion, and question-answering systems.

As capabilities in these areas and others cross the threshold from laboratory research to economically valuable technologies, a virtuous cycle takes hold whereby even small improvements in performance are worth large sums of money, prompting greater investments in research. There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.

The progress in AI research makes it timely to focus research not only on making AI more capable, but also on maximizing the societal benefit of AI. Such considerations motivated the AAAI 2008-09 Presidential Panel on Long-Term AI Futures and other projects on AI impacts, and constitute a significant expansion of the field of AI itself, which up to now has focused largely on techniques that are neutral with respect to purpose.

 

We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do. The attached research priorities document gives many examples of such research directions that can help maximize the societal benefit of AI. This research is by necessity interdisciplinary, because it involves both society and AI. It ranges from economics, law and philosophy to computer security, formal methods and, of course, various branches of AI itself.

 

In summary, we believe that research on how to make AI systems robust and beneficial is both important and timely, and that there are concrete research directions that can be pursued today.

Artificial Intelligence

Want to win with AI? Let Aura’s experience help align your business strategy with the latest technology.

We support all four parts of an AI-inspired strategy reboot:

  • Knowledge. Bring senior executives up to date on AI’s advances, the pain points that AI can solve, what the competition is doing, and what’s on the horizon.

  • Priorities. Aura can help determine which AI options support your most important goals.

  • Technology and talent. Aura can help build the right technology—including through collaborations with AI software companies—as well as AI-aligned skills and culture.

  • Governance. To reduce errors and mitigate risk, Aura can help embed transparency and security into AI from the start.

 

AI is an embarrassment of riches: it can do so much, across the entire value chain. With Aura’s help, you’ll identify the strategic insights, focus, tools, and talent to harness its power.

SKILL

Our survey suggests that around 70% of enterprises have implemented AI in some form in one or more functional areas, compared to around 62% last year. Indian organisations are firm in their resolve to combat the challenges of the pandemic, with the manufacturing sector reconfiguring traditional practices to automate value chain processes and the Government engaging with technology firms to solve problems in the new normal (e.g. contact tracing, contactless thermal screening). Similarly, universities, start-ups and the healthcare sector have developed AI-powered diagnostic guidance systems to help patients, and models to predict the spread of the virus.

 

OPPORTUNITY

As organisations repair, rethink and reconfigure their business models to navigate the uncertainties of the post COVID-19 world, they have started realising the potential of digital and cognitive technologies to increase resilience, spot growth opportunities and drive innovation.

Our annual Aura survey of decision makers has revealed that, as we emerge from the current crisis, optimism with regard to AI has gone up significantly from 72% to 92%, and the rate of AI adoption has increased from 62% to 70%.

Further, 94% of the respondents claim they have either implemented or are planning to implement AI in their organisations.

 

STARTING NOW

54% of executives say that AI solutions have already increased productivity in their businesses. These leaders are using AI to automate processes too complex for older technologies; to identify trends in historical data; and to provide forward-looking intelligence to strengthen human decisions.

AI is making back office functions, such as tax and finance, do more with less and see into the future. Other AI use cases (Aura has hundreds, in every sector of the economy) include financial planning, medical diagnosis, customised retail offerings, and models of individual customer behaviour. Soon AI will transform transportation, manufacturing, media, and more.

Intelligent Life

Just as scientists have looked to the stars for signs of intelligent life, finance has for decades looked to computers and quantitative methods for signs of artificial intelligence that can help make smarter decisions. But after all those decades, finance is confronted with a similar paradox.

There is a persistent dream of putting an AI-driven version of Warren Buffett in every investment team, one with all the positive qualities but none of the negative biases and behavioral errors that come pre-installed in humans.

The excitement of building such a revolutionary computer-based system to pick investments has driven billions of dollars of investment into building systems and hiring big-brained PhDs. The share of job openings in finance that are computer or math driven has nearly quadrupled since the Great Financial Crisis.

Despite all the investments, decades of academic papers, computer systems, and fortunes made in quant investing, the vast majority of actively managed assets are still non-quantitative in nature.

Traditional active managers will tell you quantitative techniques are not long-term enough and question how a diverse portfolio can really know anything about the “risk” of a company.

Quantitative practitioners will fire back with a long-dated backtest, or logic derived from perhaps flawed statistical techniques, and say, “Isn’t it obvious that quantitative techniques are superior to anecdote and heuristic-driven investment?”

The two schools of thought are seemingly opposed and have spent the better part of seven decades without reconciliation. Sure, some quantitative techniques have permeated into risk management or screening for stocks − but there is no AI analyst working side by side with humans to make investment decisions better. Why not?

Combining human-driven investment research with enhancement from a junior AI researcher could leverage the best of both worlds. A team like that would combine the long-term, complex thinking of a human with the unbiased, evidence-based quantitative decision-making of AI.

Combining humans with AI to perform investment research seems like such an obvious goal, and the resources being thrown at the problem are vast – but where are the AI investment analysts? To resolve this paradox, we need to rethink how finance approaches the use of AI.

 

The goal of embedding AI has failed so far because the aim is misguided 

In a classic scene from the movie Jurassic Park, which has now become a meme, the mathematician Ian Malcolm observes that the scientists “were so preoccupied with whether or not they could, they didn’t stop to think if they should.”

This is emblematic of the state of AI research, and specifically of its application to quantitative finance. Everyone is so eager to demonstrate that they are “state of the art” that too little thought goes into applying AI in the right way.

Search trends demonstrate the fashion of doing something “fancy” rather than building something transformative in the right manner. In quantitative finance, this trend has manifested itself in the overuse (and potential misuse) of alternative data combined with machine learning. Rather than thinking about longer-term solutions to the problem, practitioners are rushing to outperform each other in using niche data to build task-specific solutions.

As a result, the alpha itself is fleeting and the applications do not generalize across a broad spectrum of investment problems. Additionally, the industry is laden with tales of good intentions that fail to get adopted into the traditional investment workflow.

 

Aligning AI with how investors think is the key to progress

If one stops to think about what makes a great investor, it’s not typically a niche task specific process that differentiates the legends from the temporarily lucky.

Because markets are complex systems whose dancing landscapes are constantly changing, the best investors are generalists by nature. They take mental models and are able to apply them over and over again.

They do not merely learn facts; rather, they learn models and systems that they can pull from their toolkit and apply when appropriate.

The computational complexity is low, and the objective is to handicap all possible outcomes – to discount what the market already implies, not to forecast. They think about which investments present asymmetric payouts from a probabilistic perspective, in a folksy, back-of-the-envelope manner.
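That back-of-the-envelope, probabilistic handicapping can be sketched as a simple expected-value comparison. The probabilities and payoffs below are hypothetical, chosen purely for illustration:

```python
# Back-of-the-envelope expected-value comparison of two hypothetical bets.
# Each outcome is (probability, payoff as a fraction of capital at risk).

def expected_payoff(outcomes):
    """Probability-weighted average payoff."""
    return sum(p * payoff for p, payoff in outcomes)

# Symmetric bet: 50/50 chance of gaining or losing 10%.
symmetric = [(0.5, 0.10), (0.5, -0.10)]

# Asymmetric bet: usually loses a little, occasionally wins big.
asymmetric = [(0.7, -0.05), (0.3, 0.50)]

print(expected_payoff(symmetric))   # roughly 0.0
print(expected_payoff(asymmetric))  # roughly 0.115: positive even though it loses 70% of the time
```

The point of the sketch is the framing: weighing the whole distribution of outcomes rather than forecasting a single one, which is exactly how great investors handicap asymmetric payouts.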

To build AI that can successfully be implemented in investing, we must align the design of the machine with the cognitive tasks of great investors.


 

QED is working on building an improved approach to AI

Our team at AURA-AM, called Quantitative Evidence and Data Science (QED), has taken the approach of focusing on investor workflows as a guiding principle – we want to understand what investors do to form the investment mosaic that informs their decisions.

In the next several years, QED will spend more and more time on how to generalize these workflows and combine them with heuristics (problem-solving techniques that rely on self-education and trial and error) to form investment conclusions.

Our goal is to create a form of Artificial General Intelligence (AGI) that can apply reasoning to identify and apply mental models hidden in novel problems and then to, ultimately, make an investment recommendation.

QED has focused on aligning our machines with real investment workflows. In the next year we will focus on an effort to generalize those workflows so that ultimately the machine can make real investment recommendations.

This may seem to be an audacious goal; however, the process to get there is the best way for us to help drive science into the fundamental investment process.

As we solve problems in the path towards AGI, we can directly apply the solutions into the investment workflows.

 

Markets remain human constructions

Does this mean that QED is trying to disintermediate human financial analysts? Not at all. In Philip K. Dick’s ‘Do Androids Dream of Electric Sheep?’ – the basis for the classic film Blade Runner – humans apply the Voigt-Kampff test to potential replicants (AIs) to determine whether they are human or AI.

The test presents disturbing images to the subject: if the subject shows empathy, they are human; if not, the test identifies the subject as an AI.


Empathy is the secret weapon of human analysts, and because human goals – like saving for retirement or investing in a climate-aware manner – are the raison d’être for investing, we will always need people in the loop.

While QED’s goal is developing AGI, it is doing so in the context of having an empathic human in the loop and machine process working together towards better client outcomes.

 

Finding Artificial Intelligence—The human plus AGI analyst team of the future

The benefits of an AI/human partnership to client outcomes are clear and should motivate us to pursue this opportunity. The effort to build a successful integration of AI into the investment process doesn’t need to yield inconclusive results like Fermi’s paradox. Finance must align the design of AI with how investors think, as part of an empathic human partnership – or else these efforts are in danger of becoming just a fancy tool that operates at the periphery, and we’ll all be left to ponder: if it was so obvious, then where are all the AI analysts?

FINTECH

“In fintech, the idea is, ‘It’s only a matter of time. First, we’ll be better than the average analyst, then we’ll be better than the best analysts.’ That comes from a model that assumes technology alone is trusted. I would assert, with a lot of certainty, that technology by itself is not trusted. In financial services, you need competence and trustworthiness. Silicon Valley will enhance those who have created competence and trust – not take their business away from them. The person we trust can use technology to project themselves to a larger audience. That’s why fintech by itself or technology by itself won’t displace the roles played by financial advisors.”

What is AI (Artificial Intelligence)

Artificial intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. The term may also be applied to any machine that exhibits traits associated with a human mind such as learning and problem-solving.

The ideal characteristic of artificial intelligence is its ability to rationalize and take actions that have the best chance of achieving a specific goal. A subset of artificial intelligence is machine learning, which refers to the concept that computer programs can automatically learn from and adapt to new data without being assisted by humans. Deep learning techniques enable this automatic learning through the absorption of huge amounts of unstructured data such as text, images, or video.
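To make the machine-learning idea concrete, here is a minimal sketch (not any particular product’s method): the program is given labelled examples rather than hand-written rules, and infers an answer for new data from those examples. The data, labels, and the nearest-neighbour approach are illustrative assumptions only.

```python
# Toy machine learning: a 1-nearest-neighbour classifier.
# The program is never told the rule; it infers answers from examples.
# All data below is hypothetical.

def nearest_neighbour(train, query):
    """Return the label of the training example closest to `query`."""
    def dist(a, b):
        # Squared Euclidean distance between two feature tuples
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda ex: dist(ex[0], query))[1]

# Labelled examples: (features, label), e.g. (height_cm, weight_kg)
train = [((150, 45), "small"), ((160, 55), "small"),
         ((180, 85), "large"), ((190, 95), "large")]

print(nearest_neighbour(train, (155, 50)))  # -> small
print(nearest_neighbour(train, (185, 90)))  # -> large
```

Real machine-learning systems differ mainly in scale and model sophistication, but the principle is the same: behaviour is learned from data rather than programmed explicitly.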

KEY TAKEAWAYS

  • Artificial intelligence refers to the simulation of human intelligence in machines.

  • The goals of artificial intelligence include learning, reasoning, and perception.

  • AI is being used across different industries including finance and healthcare.

  • Weak AI tends to be simple and single-task oriented, while strong AI carries out tasks that are more complex and human-like.

 

Understanding Artificial Intelligence (AI)

When most people hear the term artificial intelligence, the first thing they usually think of is robots. That's because big-budget films and novels weave stories about human-like machines that wreak havoc on Earth. But nothing could be further from the truth.

Artificial intelligence is based on the principle that human intelligence can be defined in such a way that a machine can easily mimic it and execute tasks, from the simplest to the most complex. The goals of artificial intelligence include mimicking human cognitive activity.

 

Researchers and developers in the field are making surprisingly rapid strides in mimicking activities such as learning, reasoning, and perception, to the extent that these can be concretely defined. Some believe that innovators may soon be able to develop systems that exceed the capacity of humans to learn or reason out any subject. But others remain skeptical because all cognitive activity is laced with value judgments that are subject to human experience.

As technology advances, previous benchmarks that defined artificial intelligence become outdated. For example, machines that calculate basic functions or recognize text through optical character recognition are no longer considered to embody artificial intelligence, since these capabilities are now taken for granted as inherent computer functions.

AI is continuously evolving to benefit many different industries. Machines are wired using a cross-disciplinary approach based on mathematics, computer science, linguistics, psychology, and more.

 

Algorithms often play a very important part in the structure of artificial intelligence, where simple algorithms are used in simple applications, while more complex ones help frame strong artificial intelligence.

 

Applications of Artificial Intelligence

The applications for artificial intelligence are endless. The technology can be applied to many different sectors and industries. AI is being tested and used in the healthcare industry for suggesting drug dosages, tailoring treatments to individual patients, and assisting with surgical procedures in the operating room.

Other examples of machines with artificial intelligence include computers that play chess and self-driving cars. Each of these machines must weigh the consequences of any action they take, as each action will impact the end result. In chess, the end result is winning the game. For self-driving cars, the computer system must account for all external data and compute it to act in a way that prevents a collision.

Artificial intelligence also has applications in the financial industry, where it is used to detect and flag suspicious activity in banking and finance such as unusual debit card usage and large account deposits—all of which helps a bank's fraud department. AI applications are also being used to help streamline trading and make it easier, by making the supply, demand, and pricing of securities easier to estimate.
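A heavily simplified sketch of the kind of rule described here: flag a debit-card transaction that deviates sharply from a customer’s own history. The figures and the three-sigma threshold are hypothetical; production fraud systems use far richer models than this.

```python
# Toy fraud flag: mark a transaction as unusual if it sits far above
# the customer's historical spending. All figures are hypothetical.
from statistics import mean, stdev

def flag_unusual(history, amount, z_threshold=3.0):
    """Flag `amount` if it exceeds the customer's historical mean
    by more than z_threshold standard deviations."""
    mu, sigma = mean(history), stdev(history)
    return (amount - mu) > z_threshold * sigma

history = [12.5, 40.0, 23.9, 8.0, 31.2, 18.7, 27.4]  # typical purchases

print(flag_unusual(history, 25.0))   # routine purchase -> False
print(flag_unusual(history, 900.0))  # large outlier    -> True
```

Flagged transactions would then be routed to the bank’s fraud department for human review, matching the human-in-the-loop pattern the article describes.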

 

Categorization of Artificial Intelligence

Artificial intelligence can be divided into two different categories: weak and strong. Weak artificial intelligence embodies a system designed to carry out one particular job. Weak AI systems include games such as the chess example above and personal assistants such as Amazon's Alexa and Apple's Siri: you ask the assistant a question, and it answers it for you.

Strong artificial intelligence systems are systems that carry out tasks considered to be human-like. These tend to be more complex systems, programmed to handle situations in which they may be required to solve problems without a person intervening. These kinds of systems can be found in applications like self-driving cars or hospital operating rooms.

 

Special Considerations

Since its beginning, artificial intelligence has come under scrutiny from scientists and the public alike. One common theme is the idea that machines will become so highly developed that humans will not be able to keep up, and the machines will take off on their own, redesigning themselves at an exponential rate.

Another is that machines can hack into people's privacy and even be weaponized. Other arguments debate the ethics of artificial intelligence and whether intelligent systems such as robots should be treated with the same rights as humans.

Self-driving cars have been fairly controversial because their machines tend to be designed for the lowest possible risk and the fewest casualties. If presented with a scenario of colliding with one person or another at the same time, these cars would calculate the option that would cause the least amount of damage.

Another contentious issue many people have with artificial intelligence is how it may affect human employment. With many industries looking to automate certain jobs through the use of intelligent machinery, there is a concern that people would be pushed out of the workforce. Self-driving cars may remove the need for taxis and car-share programs, while manufacturers may easily replace human labor with machines, making some people's skills obsolete.

Robotics

Robotics has revolutionized the world in two distinct phases. The first phase brought electric machines that could perform repetitive tasks but little else. Robots such as these were used in car manufacturing and on assembly lines for similar products.

The second phase has started to create industrial robots that don't just perform simple tasks. They also absorb data and respond to new information so that they actively improve. While these robots are still predominantly seen in the automotive industry, it won't be long before they affect every type of industry.

 

KEY TAKEAWAYS

  • The healthcare industry has benefited from the introduction of surgical and telemedicine robots.

  • Drones are revolutionizing some parts of the defense and public safety industries.

  • The manufacturing industry has been using robots since the 1960s, but more intelligent manufacturing robots are dramatically increasing productivity.

  • Reconnaissance and digging robots are improving the safety and efficiency of mining operations.

 

Big opportunities. Manageable risks.

Technology is the investment outperformer among Aura's Supertrends, and that includes artificial intelligence (AI). Here is what this means for areas where AI is already being used, and why the future of AI investments is especially promising.

$15.7 trillion—that’s the global economic growth that AI will provide by 2030, according to Aura research. Who will get the biggest share of this prize? Those who take the lead now.

 

Artificial intelligence is already a reality

Birds twittering on the surround sound system and the scent of freshly brewed coffee signal the start of the morning, while the curtains open automatically and the shower heats the water to the preset temperature. A pleasant voice announces the temperature outside as well as the day's forecast and asks whether the car should be warmed in advance because of the icy temperatures. On the way to work, the self-driving car's voice assistant mentions that the driver's favorite wine is currently on special offer and asks if they would like to order a bottle. This scenario may seem like it's right out of the future, but as futuristic as it may sound, it is a reality that is now possible – enabled by artificial intelligence (AI).

AI is automating tasks that require human cognition, such as fraud detection and maintenance schedules for aircraft, cars and other physical assets. It’s augmenting human decisions on everything from capital project oversight to customer retention and go-to-market strategies for new products.

AI is changing day-to-day business and everyday life

Artificial intelligence is already being used in areas such as health care, agriculture, and retail. It helps companies to identify customer needs and reduce costs, and it plays an important role in day-to-day business as well as in everyday life. Two examples:

  • Streaming provider and producer Netflix collects millions of pieces of data about its customers in order to understand what people want to watch. Based on its analysis of the data, Netflix makes informed, customized decisions about what to recommend to each customer. The data is also used to produce new shows and films. In this way, Netflix can predict before offering a series or a film whether it will be successful, giving it a success rate twice as high as that of traditional TV producers.

  • PET scans combined with artificial intelligence allow doctors to diagnose Alzheimer's six years earlier than is possible with traditional methods, and can even do so before the first symptoms occur. It is now also possible to analyze cardiac arrhythmia more quickly and more precisely using AI than was possible by hand.

 

Researchers and entrepreneurs around the world are striving for autonomous AI that won’t need human intervention to make even highly complex decisions. That means new business models everywhere, whether financial services, healthcare, energy and mining, industrial products, or media and entertainment.

There are risks, but we know how to manage them. Audit algorithm outputs for accuracy. Integrate cybersecurity. Ensure human control of sensitive processes. Be a first mover so the technology doesn’t leave you behind. Adopt responsible AI that benefits society. Protect privacy and keep algorithms bias-free.

With AI pilots and projects live all over the globe, and new use cases added daily, at Aura we’re already veterans at helping clients navigate the new world of AI safely and strategically.

For example, artificial intelligence could accelerate the processing of legal cases. In the future, it will be possible to search through legal documents automatically, saving personnel and time resources. In the field of education, it will be possible to reduce costs by using intelligent tutoring systems. In addition, a large number of students will benefit from personalized training and targeted feedback. Satellite images in connection with, for example, meteorological data will provide wine growers with direct information about ripeness down to the cellular level of the vines. This will improve not only the harvest, but also the quality. As a result, the customer's favorite wine that Siri or Alexa orders automatically will be delivered to their smart home, and will be of the best quality.

 

How to accelerate AI

Find the AI opportunities with the highest ROI. Test concepts thoroughly for rapid adoption. Deliver innovative solutions at scale. Aura’s AI specialists offer expertise and experience with natural language processing, machine learning, deep learning, data engineering, automated ML, digital twins, embodied AI, responsible AI, and more.

Tomorrow’s AI leaders are setting their strategies today. Organisations can start with low-risk, high-return pilot programs, but for long-term success they need to

  • Align AI strategy with business strategy

  • Develop enterprise-wide AI capability

  • Build an institutionalised portfolio of AI capabilities

  • Establish AI-appropriate governance for security and risk mitigation

 

To get started, we’ve identified and answered seven key questions, below. Take a look, then get in touch. We can help you use AI to transform your world today and create a new world for tomorrow.

 

AI is ready right now to boost productivity and decision making. Use cases are multiplying, but strategy will determine the long-term winners.

AI can boost the bottom line—starting now

54% of executives say that AI solutions have already increased productivity in their businesses. These leaders are using AI to automate processes too complex for older technologies; to identify trends in historical data; and to provide forward-looking intelligence to strengthen human decisions.

AI is helping back office functions, such as tax and finance, do more with less and see into the future. Other AI use cases (Aura has hundreds, in every sector of the economy) include financial planning, medical diagnosis, customised retail offerings, and models of individual customer behaviour. Soon AI will transform transportation, manufacturing, media, and more.

The place to start is with a business problem. AI will likely be part of the solution, whether for strategy setting, customer experience and care, billing, compliance, procurement, or logistics. Organisations will need measures of ROI that can capture AI’s indirect benefits, such as freeing humans from mundane tasks or improving the effectiveness of decisions.

But solving problems is just the start. Long-term success demands an AI-aligned strategy.

How we can help

Want to win with AI? Let Aura’s experience help align your business strategy with the latest technology. We support all four parts of an AI-inspired strategy reboot:

  • Knowledge. Bring senior executives up to date on AI’s advances, the pain points that AI can solve, what the competition is doing, and what’s on the horizon.

  • Priorities. Aura can help determine which AI options support your most important goals.

  • Technology and talent. Aura can help build the right technology—including through collaborations with AI software companies—as well as AI-aligned skills and culture.

  • Governance. To reduce errors and mitigate risk, Aura can help embed transparency and security into AI from the start.

 

AI is an embarrassment of riches: it can do so much, across the entire value chain. With Aura’s help, you’ll identify the strategic insights, focus, tools, and talent to harness its power.

Healthcare Industry

The healthcare industry evolves rapidly in relation to incorporating the latest innovations and technological advances. Robotics has been a major player in the current evolution of this industry. For example, Intuitive Surgical’s da Vinci robots are surgical robots that are used by doctors and are considered the standard of care to perform minimally invasive prostatectomies. They can also help a doctor perform hysterectomies, lung surgeries, and other types of procedures.

An even less invasive robotic innovation that has changed the healthcare industry comes from iRobot: a remote presence robot that allows outpatient specialists to interact with their patients. This robot allows doctors to deliver a more personalized experience, even from a substantial distance. The demand for this sort of telemedicine has increased, especially during the coronavirus pandemic of 2020.

High potential use case: Data-based diagnostic support

AI-powered diagnostics use the patient’s unique history as a baseline against which small deviations flag a possible health condition in need of further investigation and treatment. AI is initially likely to be adopted as an aid, rather than replacement, for human physicians. It will augment physicians’ diagnoses, but in the process also provide valuable insights for the AI to learn continuously and improve.

 

This continuous interaction between human physicians and the AI-powered diagnostics will enhance the accuracy of the systems and, over time, provide enough confidence for humans to delegate the task entirely to the AI system to operate autonomously.
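The baseline idea described above can be sketched as follows: each patient’s own history defines “normal”, and any metric in a new reading that deviates strongly from that personal baseline is surfaced for physician review. The metric names, values and threshold below are hypothetical illustrations, not a real diagnostic rule.

```python
# Toy diagnostic support: flag metrics that deviate from a patient's
# personal baseline. Metrics, values and threshold are hypothetical.
from statistics import mean, stdev

def screen(patient_history, new_reading, z=2.5):
    """Return the metrics whose new value lies more than z standard
    deviations from the patient's own historical baseline."""
    flagged = []
    for metric, values in patient_history.items():
        mu, sigma = mean(values), stdev(values)
        if sigma > 0 and abs(new_reading[metric] - mu) > z * sigma:
            flagged.append(metric)
    return flagged

history = {"resting_hr": [62, 64, 63, 61, 65],
           "glucose":    [92, 95, 90, 94, 93]}

print(screen(history, {"resting_hr": 63, "glucose": 135}))  # -> ['glucose']
```

In the adoption path the article describes, such a flag would first be a prompt for the physician, with autonomy delegated to the system only as confidence accumulates.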


Barriers to overcome

It would be necessary to address concerns over the privacy and protection of sensitive health data. The complexity of human biology and the need for further technological development also mean that some of the more advanced applications may take time to reach their potential and gain acceptance from patients, healthcare providers and regulators.

Defense and Public Safety Industries

When people think about robots revolutionizing an industry, they often think of the defense or public safety industry first. Due in large part to the development of uncrewed vehicles, the public has seen the defense industry completely change, becoming one that uses robots to conduct reconnaissance, battlefield support, and sentry duty.

Drones were so effective for the military that many businesses, including Amazon, wanted to use them for commercial purposes.

The public safety industry also benefited from these types of robots. Drones can now be first responders to car accidents or other types of accidents. For example, there are many companies that are developing uncrewed, remote-controlled flying drones that can provide real-time analysis and monitor potentially dangerous situations. These types of drones have applications for both military and public safety use.

Robots are also revolutionizing the way these two industries conduct surveillance.

The Manufacturing Industry

The modern manufacturing industry first started using programmable industrial robots as early as 1961. Back then, robots were automatic, doing repetitive and menial tasks that people found boring or dangerous. Since then, robots have evolved to the point where they are now more efficient than unskilled labor in the manufacturing industry.

For example, Australia's Drake Trailers has reported that it introduced a single welding robot into its production line and saw a 60% increase in productivity. Robots that are increasing productivity in the manufacturing industry are also becoming intelligent, sometimes working and learning alongside people to increase the number of manufacturing tasks that they can complete.

High potential use case: Enhanced monitoring and auto-correction

Self-learning monitoring makes the manufacturing process more predictable and controllable, reducing costly delays, defects or deviations from product specifications. There is a huge amount of data available throughout the manufacturing process, which allows for intelligent monitoring.
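A minimal sketch of such monitoring, under hypothetical sensor figures: an exponentially weighted moving average (EWMA) tracks the process, and any reading outside a tolerance band around that learned baseline is flagged as a deviation. Real systems would combine many sensors and richer models.

```python
# Toy self-learning process monitor: the baseline adapts to in-spec
# readings via an EWMA; out-of-band readings raise an alarm.
# Target, tolerance and sensor values are hypothetical.

def monitor(readings, target, tolerance, alpha=0.2):
    """Yield (reading, alarm) pairs; the baseline adapts via EWMA."""
    baseline = target
    for r in readings:
        alarm = abs(r - baseline) > tolerance
        if not alarm:  # learn only from in-spec readings
            baseline = alpha * r + (1 - alpha) * baseline
        yield r, alarm

# Sensor readings around a 10.0 mm target with a 0.5 mm tolerance
readings = [10.1, 9.9, 10.0, 11.2, 10.05]
for value, alarm in monitor(readings, target=10.0, tolerance=0.5):
    print(value, "DEVIATION" if alarm else "ok")
```

Updating the baseline only from in-spec readings is a deliberate choice in this sketch: it keeps a single bad reading from dragging the learned “normal” toward the fault.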


Barriers to overcome

Making the most of supply chain and production opportunities requires all parties to have the necessary technology and be ready to collaborate. Only the biggest and best-resourced suppliers and manufacturers are up to speed at present.

The Mining Industry

The mining industry, once reliant on human capital, is now predominantly reliant on technology and advanced robotics. These types of robots conduct reconnaissance and compile important information about the interior of a mine. This provides a safer work environment for the remaining human miners. For example, Stanley Innovation has an advanced custom robot that is placed on a Segway robotic mobility platform (RMP), allowing it to maneuver over hazardous terrain.

Additionally, the digging equipment itself has become extremely advanced in recent years. Currently, robot-operated drills can conduct drilling deep in the earth as well as offshore, allowing mining companies to dig deeper and in more treacherous conditions than if they had to rely on human operators.

High potential use case: Autonomous fleets for ride sharing

Autonomous fleets would enable travellers to access the vehicle they need at that point, rather than having to make do with what they have or pay for insurance and maintenance on a car that sits in the drive for much of the time. Most of the necessary data is available and technology is advancing. However, businesses still need to win consumer trust.


Barriers to overcome

Technology still needs development – having an autonomous vehicle perform safely under extreme weather conditions might prove more challenging. Even if the technology is in place, it would need to gain consumer trust and regulatory acceptance.

Game Changer

Artificial intelligence (AI) can transform the productivity and GDP potential of the global economy. Strategic investment in different types of AI technology is needed to make that happen.

Labour productivity improvements will drive initial GDP gains as firms seek to "augment" the productivity of their labour force with AI technologies and to automate some tasks and roles.

Our research also shows that 45% of total economic gains by 2030 will come from product enhancements, stimulating consumer demand. This is because AI will drive greater product variety, with increased personalisation, attractiveness and affordability over time.

The greatest economic gains from AI will be in China (26% boost to GDP in 2030) and North America (14.5% boost), equivalent to a total of $10.7 trillion and accounting for almost 70% of the global economic impact.
 

$15.7 trillion game changer

Total economic impact of AI in the period to 2030

What comes through strongly from all the analysis we’ve carried out for this report is just how big a game changer AI is likely to be, and how much value potential is up for grabs. AI could contribute up to $15.7 trillion to the global economy in 2030, more than the current output of China and India combined. Of this, $6.6 trillion is likely to come from increased productivity and $9.1 trillion from consumption-side effects.

While some markets, sectors and individual businesses are more advanced than others, AI is still at a very early stage of development overall. From a macroeconomic point of view, there are therefore opportunities for emerging markets to leapfrog more developed counterparts. And within your business sector, one of today’s start-ups or a business that hasn’t even been founded yet could be the market leader in ten years’ time. 

The Impact of AI

Artificial intelligence (AI) can transform the productivity and GDP potential of the UK landscape. But, we need to invest in the different types of AI technology to make that happen.

Our research shows that the main contributor to the UK's economic gains between 2017 and 2030 will be consumer product enhancements stimulating consumer demand (8.4%). This is because AI will drive a greater choice of products, with increased personalisation, and make those products more affordable over time.

Labour productivity improvements will also drive GDP gains as firms seek to "augment" the productivity of their labour force with AI technologies and to automate some tasks and roles. 

There will be significant gains across all UK regions, with England, Northern Ireland, Scotland and Wales seeing an impact from AI in 2030 at least as large as 5% of GDP.

Explore the global results further using our interactive data tool or see which of your products and services will provide the greatest opportunity for AI. You can also download our UK report to get a more detailed analysis and commentary on the positive economic outcomes.

For more details of the methodology behind our UK and global estimates, please see our report on the macroeconomic impact of AI. As well as estimating impacts on GDP, this also includes a detailed discussion of potential labour market impacts, drawing on our related research on job automation.

 

 

The impact on UK business

Our analysis highlights that the value of AI enhancing and adding to what businesses can do now is as large as, if not larger than, the impact of automation. It shows how big a game changer AI is likely to be – transforming businesses, people’s lives and society as a whole.

But, for the UK to benefit fully, we need to:

  • Create the right environment for existing and new businesses to innovate and make the most of the product, productivity and wage benefits that this technology can bring.

  • Look at how to obtain the right talent, technology and access to data to make the most of this opportunity. To meet this challenge, we need to be even more innovative in the way we develop technology skills in the UK.

  • Make sure that AI systems are adopted responsibly and that every part of society can reap the benefits. Our Responsible AI report warns that effective controls need to be built into the design and implementation phase, so AI’s positive potential is secured. This will also address stakeholder concerns about it operating beyond the boundaries of reasonable control.

 

The regional impact

There will be significant gains as a result of AI across all UK regions.

The larger total impact on GDP in some UK regions reflects the different trade patterns in each of the countries. England, and to some extent Scotland and Wales, have stronger trade links with Europe and the rest of the world. The gains through trade related to artificial intelligence are likely to put even greater upward pressure on GDP in these countries by 2030.

Smart mobility

Harnessing technology

When humanity and technology hit the road

There’s always a road ahead. What changes is how you travel it. As the centers of civilization, the world's cities have always driven people forward. Today is no different, but human ingenuity and intelligent technologies are creating new possibilities for a cleaner, healthier kind of progress.

 

Connectivity, data and transportation are coming together to reduce friction in people’s lives. From electric cars to mobility as a service to intelligent transport systems, this is the era of smart mobility — and it’s fueling momentum for humanity.

At Aura, we envision safe, efficient, convenient and sustainable solutions that help to keep everyone and everything in motion.

 

The digital revolution has created a new generation of consumers who want ever more accessible, portable, flexible and customised products, services and experiences. They expect to move seamlessly – in real time – between the physical and virtual worlds. And they’re prepared to disclose quite a lot about themselves to get what they want.

Technological advances are also transforming the workplace. They’re providing the tools to enable people to work anywhere, anytime; putting more power in the hands of employees than ever before; and erasing the ‘four walls’ of the organisation as collaborative networks replace conventional corporate modes of operating.

The social, mobile, analytic and cloud technologies that underpin this revolution are producing numerous opportunities for companies to generate value in totally different ways – and even, indeed, to redefine the businesses they’re in. The opportunities aren’t just confined to the conventional corporate spheres of activity. Armed with new technologies, some firms will be able to solve complex social problems, profitably.

 

What does this mean for your business?

A growing number of companies are embracing ‘disruptive’ technologies. They’re investing in social media, mobile devices, cloud computing and big data to engage with customers in new ways and gather insights for developing and marketing new offerings more effectively. They’re also joining forces with organisations in adjacent industries.

But capitalising on these technologies is difficult, given the speed at which they’re progressing. It’s all too easy to get on the ‘wrong side’ and end up as a casualty, not a pioneer. Many companies are also unsure about how to use the data they collect. And finding good allies is becoming very much harder as more and more firms collaborate.

The transformation of the workplace has other implications. Most companies will have to provide digital tools for training people who don’t pursue traditional career paths. They’ll also have to adopt a more democratic management style to attract ‘digital natives’ and employ executives who are highly skilled at assembling and managing teams.

 

Smart Mobility

When humanity and technology hit the road

Visit our Smart Mobility Hub - an essential resource for the latest perspectives that define our collective mobility challenges and help find the smartest solutions. 

From cities and urban infrastructure, to automation and impending regulatory hurdles, we’re ensuring the next stop is a new beginning for all.

 

Driving momentum with electric vehicles (EV)

 

As concerns continue to grow about rapid urbanisation and air pollution caused by transportation, both of which contribute to global warming, governments are passing regulations on CO2 and other vehicular emissions. In turn, the industry is pursuing long-term programs to transition product portfolios to include more electric powertrains and other green technologies. But it’s not enough for the automotive industry to focus on EV powertrain technology.

 

For more consumers to purchase these cars and for more logistics companies to invest in electric vehicles for their commercial fleets, some significant roadblocks such as high initial costs, limited battery range, high charging times and limited charging networks will have to be removed. That’s why at Aura, we believe in working as a community of solvers — together with automakers, public utilities, city planners, building developers, battery suppliers and government leaders — to create an ecosystem approach that combines human ingenuity with intelligent technologies.

 

Connecting people and information to create better experiences

 

Changing behaviors and emerging technologies — like intelligent cloud-based data analytics platforms and high speed connectivity networks — are fundamentally shifting the way people and goods move and what we know about it. Vehicles, infrastructure, public transportation, and even pedestrians are producing data on their locations, transactions, and interactions. But as big data in mobility continues to evolve, who owns this data? Are car manufacturers, public and private sector infrastructure and mobility providers, and policymakers and technology providers fully utilising that data to create better mobility experiences? This raises issues regarding cybersecurity, the role of government, data ownership, privacy and protection, and OEM responsibilities that will all demand increasing attention — and consensus.

City environments, upgraded

 

As populations swell in urban centers, so too does the level of congestion, accidents and pollution. But increasingly, by taking a shared, cross-industry ecosystem approach to urban mobility planning and leveraging new technological solutions, Aura professionals can help solve these challenges. Potential solutions include public transportation improvements, more efficient supply chains and last mile mobility — helping people and goods get from point A to B in ways that are safer, cleaner, accessible and affordable. Through multi-modal systems that combine private and public transport with intelligent traffic networks, infrastructure upgrades, and supportive regulation, urban mobility can keep people and goods moving. 

Reinventing manufacturing for a changed world

Over the past year, manufacturing firms have faced unprecedented disruption. Rapid, daunting change caused by the COVID-19 crisis has resulted in accelerated digitalisation, increased occurrence of cyber attacks, and transformed consumer behavior. For manufacturers, the pandemic has revealed weaknesses across their end-to-end activities—and highlighted the need for greater resilience and agility.

Today’s challenges are unlike any the world has experienced before and we at Aura wanted to understand how they are affecting firms’ strategies, practices, and performance. So we asked. In Aura’s 2021 COO Pulse Survey, we reveal how global manufacturing executives from over 600 large companies are refocusing their plans and priorities as they look beyond  the pandemic. Use our interactive tool below to uncover what manufacturing leaders are doing today to rethink and reconfigure for a stronger tomorrow. Discover what is critical to leaders with regards to cybersecurity, supply chain and distribution, digital innovation, and ESG.

 

Investments continue to pour into cybersecurity. Sixty-nine percent of organisations predict a rise in cyber spending in 2022, compared with 55% last year. More than a quarter (26%) predict cyber spending hikes of 10% or more; only 8% said that last year.

 

Organisations know that risks are increasing. More than 50% expect a surge in reportable incidents next year above 2021 levels.

Already, 2021 is shaping up to be one of the worst on record for cybersecurity. Ever more sophisticated attackers are plumbing the dark corners of our systems and networks, seeking — and finding — vulnerabilities. Whatever the nature of an organisation’s digital Achilles’ heel — an unprotected server containing 50 million records, for example, or a flaw in the code controlling access to crypto wallets — attackers will use every means at their disposal, traditional as well as ultra-sophisticated, to exploit it.

The consequences for an attack rise as our systems’ interdependencies grow more and more complex. Critical infrastructures are especially vulnerable. And yet, many of the breaches we’re seeing are still preventable with sound cyber practices and strong controls.

Simplifying cyber

As digital connections multiply, they form increasingly complex webs that grow more intricate with each new technology. A smartphone lets us carry a variety of “devices” — telephone, camera, calendar, TV, health tracker, an entire library of books, and so much more — in our pocket, simplifying our lives in many ways and letting us work on the go. The Internet of Things lets us perform myriad tasks by uttering a simple command, enables factories to all but run themselves, and lets our healthcare providers monitor our health from a distance.

But the processes needed to manage and maintain all these connections — including cybersecurity — are getting more complicated, too. Runaway complexity evokes the Lernaean Hydra from Greek mythology: cut off one head, and two grow in its place.

Is the business world now too complex to secure? Leaders are sounding the alarm. Some 75% of respondents to our 2022 Global Digital Trust Insights Survey say that too much avoidable, unnecessary organisational complexity poses “concerning” cyber and privacy risks.

But because some complexities are necessary, your enterprise shouldn’t streamline and simplify its operations and processes thoughtlessly, but consciously and deliberately.

This 2022 Global Digital Trust Insights Survey offers the C-suite a guide to simplifying cyber with intention. It focuses on four questions that tend to get short shrift but, if properly considered, can yield significant dividends.

These questions may surprise and even challenge you because, in a survey about data trust, they aren’t technology-centered. Tech, in itself, is not the answer to simplified security.

Our focus, instead, is on working together as a unified whole, from the tech stack to the board room — starting at the top with the CEO. Security is a concern for the entire business, in every function and for every employee.

  1. How can CEOs make a difference to your organisation?

  2. Is your organisation too complex to secure?

  3. How do you know if you’re securing your organisation against the most important risks to your business?

  4. How well do you know your third-party and supply chain risks?

 

Based on respondents’ answers to these questions, we determined the top 10% of organisations that are most advanced in their practices. These most advanced organisations are twice as likely to report significant progress on important cyber goals: instilling a culture of cybersecurity, managing cyber risk, enhancing communication between boards and management, and coordinating cyber strategy with business strategy.

 

Multiplying the effect: simplifying moves that get you 5x or more results

Strategists and technologists have touted the potential of digital business models to boost business 10x — a Holy Grail promise of exponential returns on digital investments. Likewise, the Survey reveals how simplifying business processes and operations can have a “multiplier” effect on security and privacy.

Here are the four Ps to realising your full cyber potential, as exemplified by the most advanced and most improved organisations, which employ them all.

Principle. The CEO must articulate an explicit, unambiguous foundational principle establishing security and privacy as a business imperative.

People. Hire the right leader, and let the CISO and security teams connect with the business teams. Your people can be vanguards of simplification even as you build “good complexity” into the business.

Prioritisation. Your risks continually change as your digital ambitions rise. Use data and intelligence to measure your risks continually, as well.

Perception. You can’t secure what you can’t see. Uncover blind spots in your relationships and supply chains.

As common-sense as these precepts and practices might seem, they’re not commonplace. Only the top 10% have adopted them, and those organisations also report making significant progress toward their cyber objectives during the past two years.

On the other hand, many enterprises continue to struggle amid risky, runaway, befuddling complexity. Bad habits are often why: using many tech solutions that, too often, don’t even work together; not coordinating the work of various functions on resilience or third-party risk management; not creating and adhering to processes for dealing with data (governance); and not speaking the language of business when talking about cyber.

Businesses develop these bad habits in the name of speed, or they accept and assimilate them out of resistance to change. The good thing, however, is that bad habits can be broken. And C-suite champions can help develop new habits of coordination and collaboration among all functions, business and tech, for an organisation that’s simply secure.

 

Aura Survey

 

 

The 2022 Global Digital Trust Insights is a survey of 3,602 business, technology, and security executives (CEOs, corporate directors, CFOs, CISOs, CIOs, and C-Suite officers) conducted in July and August 2021. Female executives make up 33% of the sample. 

 

Sixty-two percent of respondents are executives in large companies ($1 billion and above in revenues); 33% are in companies with $10 billion or more in revenues. 

 

Respondents operate in a range of industries: Tech, media, telecom (23%), Industrial manufacturing (22%), Financial services (20%), Retail and consumer markets (16%), Energy, utilities, and resources (8%), Health (7%), and Government and public services (3%).

 

Respondents are based in various regions: Western Europe (33%), North America (26%), Asia Pacific (18%), Latin America (10%), Eastern Europe (4%), Middle East (4%), and Africa (4%).

The Global Digital Trust Insights Survey was formerly known as the Global State of Information Security Survey (GSISS).

 

Cyber risk quantified. Cyber risk managed.

Quantifying the financial risks of different cyber threats can increase the bang for the cyber buck: it enables you to direct resources to the greatest risks.

An almost unanimous consensus: you need to quantify cyber risks

 

Cyber risks have risen to the top of the list of threats to business prospects. In a 2020 Harvard Business Review Analytic Services survey of 168 US executives, sponsored by Aura, 74% of respondents named cyber risk as one of the top three risks their companies face. That puts cyber risk well ahead of the next risk category, business disruption and systems failures, which only 42% cited.

Cyber threats constantly occur and evolve. Companies face different threat actors working through different threat vectors to create different risk events.

How to defend against cyber threats without breaking the bank? Start by quantifying cyber risks. By determining the likely financial impact of different threats, you can direct finite resources to fend off the greatest threats. In Aura’s Global Digital Trust Insights 2021 survey, 17% of cyber managers told us they have already done so. Sixty percent are starting to. Another 17% plan to.

Better and more granular quantification is the goal, because accurate, actionable cyber risk quantification is not easy. Cyber risks differ from more traditional risks (such as economic or market ones), which risk managers have long experience modeling. These risks come from strategic adversaries who constantly switch up their technology and methods to seek out weak spots in yours. It can be highly challenging to build a reliable, standardized risk-assessment model for this fast-changing combination of economic, social, behavioral and highly technical factors.

Yet supported by the enormous growth in data on cyber risk, companies today can successfully make a sophisticated financial assessment of the cyberthreats that they face. They can then focus resources toward managing the gravest risks.
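To make the idea of a financial assessment concrete, the sketch below runs a toy Monte Carlo simulation in the spirit of frequency-times-magnitude methods such as FAIR: each threat scenario gets an estimated event frequency and a per-event loss range, and simulated years are averaged into an expected annual loss. The scenario names, frequencies and loss ranges are invented for illustration; a real model would be calibrated to incident and claims data.

```python
import random

def simulate_annual_loss(freq_mean, loss_low, loss_high, trials=20_000):
    """Estimate expected annual loss for one threat scenario.

    Yearly event counts are approximated with a binomial draw whose
    mean equals freq_mean; per-event costs are uniform on
    [loss_low, loss_high]. Toy distributions, purely illustrative.
    """
    total = 0.0
    for _ in range(trials):
        # Number of loss events in one simulated year.
        events = sum(1 for _ in range(int(freq_mean * 10))
                     if random.random() < 0.1)
        # Add the cost of each simulated event.
        total += sum(random.uniform(loss_low, loss_high)
                     for _ in range(events))
    return total / trials

# Hypothetical scenarios: (name, events per year, min cost, max cost).
scenarios = [
    ("Ransomware",             0.3, 200_000, 2_000_000),
    ("Insider data theft",     0.8,  50_000,   400_000),
    ("Cloud misconfiguration", 2.0,  10_000,   120_000),
]

# Rank threats by expected annual loss so spend follows the gravest risks.
ranked = sorted(((n, simulate_annual_loss(f, lo, hi))
                 for n, f, lo, hi in scenarios), key=lambda x: -x[1])
for name, ale in ranked:
    print(f"{name:24s} ~${ale:,.0f} expected annual loss")
```

Even with made-up inputs, the output illustrates the payoff: a rare-but-severe scenario can dominate a frequent-but-cheap one in dollar terms, which a low/medium/high label would never reveal.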

A tale of two sizes: the current state of cyber risk quantification

How advanced are companies in quantifying cyber risks? According to the Harvard Business Review Analytic Services survey, fewer than half have risk matrices for cyber threats. Most of the matrices that do exist lack the sophistication decision makers need. Many are just spreadsheets with risks subjectively scored as low, medium or high.
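The “low, medium or high” spreadsheet approach described above can be sketched in a few lines; the register entries and scores below are invented for illustration. The exercise shows why such matrices rank risks but cannot tell decision makers what a risk might cost:

```python
# Subjective 3x3 risk matrix: likelihood x impact, each scored 1-3.
SCORES = {"low": 1, "medium": 2, "high": 3}

def matrix_rating(likelihood: str, impact: str) -> int:
    """Combine two subjective ratings into a single 1-9 priority score."""
    return SCORES[likelihood] * SCORES[impact]

# Hypothetical risk register -- opinions, not dollar figures.
register = {
    "Phishing":    ("high", "medium"),
    "Ransomware":  ("medium", "high"),
    "Lost laptop": ("high", "low"),
}

for risk, (lik, imp) in sorted(register.items(),
                               key=lambda kv: -matrix_rating(*kv[1])):
    print(f"{risk:12s} score {matrix_rating(lik, imp)}")
```

Note that “Phishing” and “Ransomware” tie at a score of 6 even though their financial consequences could differ by orders of magnitude — exactly the lack of sophistication decision makers complain about.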

Only a tiny minority of survey respondents use the open-source FAIR (Factor Analysis of Information Risk) methodology, analyze causal relationships in high-risk scenarios or deploy actuarial models for cyber risks. Yet if based on solid data and methodologies, these models can help provide what companies really need: a financial estimate of the risks.

The survey also revealed a tale of two sizes: shortcomings are particularly acute in companies with fewer than 10,000 employees. Compared to larger companies, they are four times as likely not to apply any kind of quantitative assessment of cyber risks (20% versus 5%). They are also markedly less likely to use even rudimentary risk matrices (40% versus 55%).

Cyber risk quantification techniques are neither widespread nor sophisticated

                                                     Under 10k    Over 10k
                                                     employees    employees

Quantitative
  Open-source FAIR methodology                           9%          17%
  Bow-tie methodology (analyzing causal
  relationships in high-risk scenarios)                 10%           9%
  Actuarial models                                       8%          12%

Hybrid
  Risk matrices with frequency and impact
  scales defined and scores assigned to them            40%          55%

Qualitative
  We do not apply quantitative methodologies            20%           5%
  Don't know                                            29%          35%

 

Q: What methodology(s) does your organization use to quantitatively measure cyber risk? (Select all that apply)


Base = 168 US executives.

Source: Harvard Business Review Analytic Services Survey, April 2020

Top triggers: better manage cyber risks and cyber spend

The two major triggers for quantifying cyber risk are the need to improve cyber risk management and to prioritize (and justify) cyber spend. The current gaps in these areas are glaring.

On risk management. Fewer than half (45%) of the respondents in the Harvard Business Review Analytic Services survey “strongly agreed” that they had a formalized process to evaluate cyber risks in line with business priorities. Fewer than half (42%) expressed such strong confidence in their ability to adjust cyber investments to match changes in the risk landscape or in business priorities. Scarcely a third (36%) strongly agreed that they aggregate cyber risk with other enterprise risks to help leadership understand overall enterprise risk tolerance.

On prioritization of cyber spending. Fewer than half (45%) were very confident that their cyber spend is allocated to the most significant risks, according to our Global Digital Trust Insights 2021 survey. Fewer than half (42%) were very confident that their cyber spend is focused on the remediation, risk mitigation and/or response techniques that will provide the best return.

These shortcomings show up in low board confidence. In our survey of 693 corporate directors, only 32% said they understood their company’s cyber vulnerabilities very well. By comparison, 87% said they are very familiar with their company’s strategy and 68% with the competitive landscape.

Data and Analytics Service 

New technologies are disrupting business as usual.

Technologies such as blockchain, artificial intelligence, augmented and virtual reality, and the internet of things are rapidly reshaping our world and evolving at breakneck speed. Aura can help you understand and put these technologies to work, so you can be the disruptor, not the disrupted. We work with you to research, co-create, prototype, test, and deploy new services and solutions powered by the latest technological advances.

 

Discover new ways to transform your business

Emerging technology strategy needs to be a core part of every company’s corporate strategy. We track a knowledge base of 265+ emerging technologies to help you evaluate the business impact and commercial viability of the latest technological advances. Our dedicated technologists and industry specialists can help you create and implement a strategy that takes advantage of what we call the "Essential 8," the emerging technologies that we believe every business should consider.