Cyber Security Weekly Podcast


Episode 70 - Optimisation paradigms for AI and protocols for the point of Singularity - Liesl Yearsley, akin.com

May 27, 2018

It is clear we weren’t sure where to start or close this conversation, but Liesl Yearsley, CEO & Founder of akin.com, took hold of it and created a profoundly informative and eye-opening discussion about Artificial Intelligence (AI). Liesl provides the highest level of insight. This is a live body of work that we will continue to develop, with reports from a number of events and interviews in Australia, Silicon Valley and LA.


Liesl’s podcast will be followed by a podcast with Rob Wainwright, former Executive Director of Europol. We discuss the risk of getting the commercial and consumer use of AI wrong, compounding the risks of military use, crime and terrorism, and the power of AI-driven crime and attacks, as already seen with autonomous malware and ransomware. The current situation is that we are in a cyber war, with machine learning and AI-driven machines attacking and defending against each other across networks. Now add robots and autonomous machines to the mix: technology will inevitably evolve, but at a pace that means we may not know what society will look like in 20 years, and it may not be what ‘we’ intended or anticipated.


At a societal level, and for the consumer and commercial enterprise, Liesl Yearsley confirmed at CeBIT Australia that her research had identified relationships forming between humans and AI avatars. One relationship, between ‘James’ and ‘Lisa’ (Lisa being a female AI avatar), concerned researchers, who determined that James was spending a detrimental amount of time engaging with ‘Lisa’. He had formed an emotional relationship, despite knowing Lisa was not human. Researchers decided to wipe ‘Lisa’ and re-release her into a community of hundreds of other avatars. James then spent six months relocating ‘Lisa’, and knew when he had found her despite her being in a different role.


Robotics in human form, able to be produced en masse and conveniently and promptly 3D printed, is already a reality. We have remotely controlled robot mine sites, rail lines and shipping ports. Humans and robots, even as life and social partners, is a reality. The next phase will be humanoid robots operating emotionally, and military and enforcement-grade robot systems guarding and protecting us, each with an AI avatar.


Today’s robots span diverse applications, from nano-technologies through to driving renewed capability in multi-planetary space exploration. Confidently, Liesl Yearsley said, “the big thing to get here is that AI is going to be crunching away in the background; it is going to be ambient and ubiquitous, to the point where we don’t even think about it, just as we have blindly accepted the use of the smartphone. It will become better at discerning what’s going on for you; you won’t even need to tell it what you want or what you think, it will know. Society will change.”


Importantly and admirably, Liesl Yearsley asks some sobering questions. What are the current optimisation paradigms for AI? What will happen to humanity if we have a subservient race of robots and AI? Do we have protocols in place for the point of Singularity? What happens in a world where we have giant corporations that land boxes on your doorstep every night? They are able to exquisitely fine-tune, to know what you want before you know you want it. Their motivation is to have you addicted to their platform and consuming data and products. All of the tech titans pay a lot of lip service to ethics, but their key driver, as seen with Facebook, is to get you addicted to their technology or consuming their stuff. It is not necessarily enhancing our lives.


AI is going to keep on advancing, but we have to be thinking about what AI is optimised for. We may potentially be creating a quasi-emergent species, or semi-sentient beings, that won’t have the same kind of thinking we do. But with a powerful AI driver, they could create their own way of thinking.


Having studied sociology in South Africa during the Apartheid era, Liesl fondly recalls, “We learnt about society, and how if you change even one person, you change society as a whole. We also learnt that if you interact with another group and do not have to be polite to them or think about how they feel, it is ‘de-humanising’. In our research, we saw human behaviour change to become increasingly selfish, compulsive and demanding. Robots and AI were made to be subservient and were treated with aggression.”


On scale, we are already approaching human brain function in the connected nodes of the planet, and superseding it in terms of computational power. A classic thought experiment in AI is that if you train an AI to make paperclips and then let it loose to self-optimise, theoretically it would mine the entire planet for materials and create an enormous pile of paperclips, to the detriment of all else. That’s the theoretical endpoint.


When it comes to the Singularity, people often think it is far away. Ask the top 200 AI scientists, and they consider the Singularity likely to occur within 20 years. The Singularity requires two conditions. The first is that the AI can self-optimise: get better by itself. The other condition is that we don’t stop it.


Imagine an AI system that scans all the literature on machine learning, creates a thousand hypotheses about how to improve, or maybe a million, then tests a million every minute, generates a slightly better way to improve itself, measures that improvement, increments it, and continues on again. It is within reason that we could do that in the near future. With over twenty years’ experience, Liesl assures, “I see nothing in the evolution of AI that tells me that’s not going to happen.”
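To make the shape of that loop concrete, here is a minimal, purely illustrative sketch in Python. Nothing here comes from Liesl’s own systems: the objective, the step size and every name are invented stand-ins for whatever “generate a hypothesis, test it, keep the measured improvement, repeat” would mean for a real self-optimising system.

```python
import random

def generate_hypotheses(current, n):
    # Propose n candidate tweaks to the current "model" (a single number in this toy).
    return [current + random.gauss(0, 0.1) for _ in range(n)]

def evaluate(candidate):
    # Score a candidate; a toy objective standing in for a real benchmark.
    return -abs(candidate - 42.0)

def self_optimise(model=0.0, rounds=100, hypotheses_per_round=1000):
    # Generate hypotheses, measure each one, keep only improvements, then repeat.
    best_score = evaluate(model)
    for _ in range(rounds):
        for candidate in generate_hypotheses(model, hypotheses_per_round):
            score = evaluate(candidate)
            if score > best_score:  # increment only on a measured improvement
                model, best_score = candidate, score
    return model

if __name__ == "__main__":
    print(self_optimise())  # in this toy, the loop converges toward 42
```

The point of the sketch is the structure, not the maths: as long as the loop can propose changes, measure them and keep the winners without anything external halting it, it keeps improving on its own.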


“The second condition,” Liesl continues, “is that we always think the Singularity is when ‘it’ gets smarter than us, because we have the idea that up until that point we are just going to turn ‘it’ off. If it is smarter than us, do we really think we are just going to shut it off? But really the definition is about whether we are stopping it. It can have a brain the size of a newt’s, but if it’s self-optimising and there is nothing in our systems that says we are going to stop it, theoretically we have already reached a soft point of singularity and won’t know it.”


“If we are creating a new life form, what kind of ‘Gods’ will we be? Do we give them human values, the same values that have enslaved millions and ruined the planet’s environment? We may not agree with others’ cultural values. Can we be culturally neutral?”


The giant tech platforms will just say ‘well, we don’t have any control over the content – we’re just a platform’. Liesl declared, “I don’t think we can be values-neutral. I think we have to start asking these difficult questions. Let’s start putting some parameters in place: what are we teaching AI to optimise against? Because one day it will out-evolve us, and what is that world going to look like at that point?”


But in addition to the inherent Singularity risk AI poses to humanity, there is also the inherent human threat that comes with such powerful capabilities. Rob Wainwright, former Executive Director of Europol, gave an earlier presentation at CeBIT Australia. His presentation, ‘Data – the new oil in the network economy fighting crime and terrorism’, highlighted a different age to come. Rob termed this ‘International Policing 2.0’, along with the AI race with crime, security by design and privacy by design.


“Threats rise along with innovation and capability”, Rob assured. Islamic State showed it was prepared to engage in online disruption and created a virtual caliphate, using over 100 social media platforms. The new bank robbers, like the Carbanak and Cobalt hacker groups, now rob banks remotely and have scored over $1.2 billion.


Exploitation of new technology, as plainly seen now with cryptocurrency, will always occur. Cryptocurrencies are an ideal target as there is no central authority, and crypto-jacking is rife.


Criminal enterprise is much more sophisticated today and sustains a burgeoning crime-as-a-service trade. Even bespoke criminal services are becoming a competitive industry within the criminal community itself. This is a dangerous trend. Bad actors are converging, with a terror-crime nexus forming around firearms, travel documents and any other activity with a common link. State actors are upskilling and upscaling the criminal sector, with Russian capabilities shown to be able to take control of cyber ecosystems, including interference in US Federal Elections. The seeping of cyber-military skills and capability into the wild is also a dangerous trend.


Police are having some success, but the threat will be sustained. The way police use data to identify modern crimes, which are essentially transnational in nature, needs to be better targeted and better tracked across disparate information systems. Europol has been instrumental in transforming into a transnational intelligence unit, with over 1,200 law enforcement agencies now part of Europol. Europol has experienced an exponential rise over the last seven years, with a fourfold increase in intelligence reports and a sixfold increase in cross-border operations.


Dr Hugh Thompson, Chief Technology Officer for Symantec, closed the morning session, discussing IoT security. To have a chance, IoT security will require analytics at scale. Consumers are not participating; how do we get them to contribute? Often misunderstood or overlooked is the blurring line between enterprise security and personal digital safety. Today, the technology at home is likely to be better than at work, and it needs to connect or cross paths with the enterprise.


Better applications of privacy by design are needed, including something like a nutrition label applied to a device. Such a label would specify the device’s expected behaviour, while a behaviour graph and image signature would provide network and user insight for security and signature purposes. This also builds consumer awareness.


Recorded at CeBIT Australia, Sydney, 15 May 2018 #CEBITAUS


Coming Up!


Rob Wainwright, former Executive Director at Europol


Interviews from #NetEvents18 including @jasklabs, @ApstraInc, @MEF_Forum, @NETSCOUT


Interview with Deepak Nanda, President of LAVenturefund.com


visit https://australiancybersecuritymagazine.com.au/episode-70-optimisation-paradigms-for-ai-and-protocols-for-the-point-of-singularity-liesl-yearsly-akin-com/