As a rule, I avoid this type of speculation because I don’t want to aid anybody in cracking or penetrating systems. However, AI attacks are either already happening or going to happen anyway. First, a lot of the global ‘attack surface’ is insecure by design, and we need to acknowledge that and fix it. Second, AI presents unique challenges, and we are absolutely not ready to deal with them. Third, one of the things we should be doing on an ongoing basis is using AI to mitigate the problems that arise from AI.

AI technologies introduce unique challenges to cybersecurity because they can generate a broad spectrum of automated attacks and countless variations of known penetration techniques. The agility and scale of AI-generated threats demand a proactive approach to defense, one that uses AI itself to anticipate vulnerabilities and devise countermeasures before they are exploited. To meet this need, the proposed model uses Generative Adversarial Networks (GANs) trained on codified penetration techniques to invent novel attack variations and, in turn, the countermeasures that neutralize them.
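To make the GAN idea concrete, here is a minimal sketch in PyTorch. It assumes penetration techniques have already been codified as fixed-length numeric feature vectors; the generator learns to propose plausible variations and the discriminator scores candidates against the known corpus. The dimensions, names, and stand-in dataset below are hypothetical placeholders, an illustration of the technique rather than the proposed model itself.

```python
# Minimal GAN sketch (PyTorch), assuming penetration techniques are already
# "codified" as fixed-length numeric feature vectors. All sizes, names, and
# the synthetic corpus below are hypothetical placeholders.
import torch
import torch.nn as nn

FEATURE_DIM = 64   # hypothetical size of a codified technique vector
LATENT_DIM = 16    # noise dimension fed to the generator


class Generator(nn.Module):
    """Maps random noise to a candidate (novel) technique vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 128), nn.ReLU(),
            nn.Linear(128, FEATURE_DIM), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)


class Discriminator(nn.Module):
    """Scores how plausible a technique vector is relative to the known corpus."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEATURE_DIM, 128), nn.LeakyReLU(0.2),
            nn.Linear(128, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)


def train(real_techniques: torch.Tensor, epochs: int = 100) -> Generator:
    g, d = Generator(), Discriminator()
    opt_g = torch.optim.Adam(g.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(d.parameters(), lr=2e-4)
    loss_fn = nn.BCELoss()
    n = real_techniques.size(0)

    for _ in range(epochs):
        # Discriminator step: known techniques vs. generated candidates.
        fake = g(torch.randn(n, LATENT_DIM)).detach()
        d_loss = loss_fn(d(real_techniques), torch.ones(n, 1)) + \
                 loss_fn(d(fake), torch.zeros(n, 1))
        opt_d.zero_grad()
        d_loss.backward()
        opt_d.step()

        # Generator step: push candidates toward "plausible" territory.
        g_loss = loss_fn(d(g(torch.randn(n, LATENT_DIM))), torch.ones(n, 1))
        opt_g.zero_grad()
        g_loss.backward()
        opt_g.step()

    return g  # the trained generator now proposes variations a defender can vet


if __name__ == "__main__":
    # Stand-in corpus: random vectors where a real pipeline would load
    # encoded descriptions of known penetration techniques.
    corpus = torch.rand(256, FEATURE_DIM) * 2 - 1
    generator = train(corpus, epochs=50)
    candidates = generator(torch.randn(8, LATENT_DIM))
    print(candidates.shape)  # (8, FEATURE_DIM): candidate variations to test defenses against
```

The point of the sketch is the feedback loop: anything the generator produces that the discriminator cannot distinguish from the known corpus is exactly the kind of variation a defender should probe for before an attacker does.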
This is a response to a question on Quora about the challenges of developing AI systems capable of dealing with human emotions. Judging from the other answers, I would say that the first obstacle might be to get people to believe it is possible. I am strongly of the opinion that it is operationally possible and conceptually a “slam dunk”. By ‘operationally’, I mean that we can train an AI to recognize all the various signals that people use to convey their state of mind, including temporal, social, geographical, and cultural context. We can do that in the same relatively well-established way we use GANs to work back and forth with photos, speech, and other types of input data. If we can assemble the data, we can create an AI that both recognizes emotional states and responds by expressing an appropriate emotional state of its own. Judging from what we have seen in the past year, such a system would likely take less than a year to train from the ground up, both to ‘read’ emotional states and to express them appropriately in response.
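As a rough illustration of that ‘read, then respond’ loop, here is a simplified sketch in PyTorch (a plain classifier plus a fixed response table, rather than the full GAN-style pipeline described above). It assumes upstream models have already turned speech, facial, and text signals into feature vectors, and that situational context is encoded the same way; the label set, dimensions, and response policy are all hypothetical.

```python
# Sketch of a "read, then respond" loop. Assumes precomputed feature vectors
# for signals (audio/visual/text) and context (temporal, social, geographic,
# cultural). Labels, dimensions, and the response mapping are illustrative.
import torch
import torch.nn as nn

EMOTIONS = ["neutral", "joy", "sadness", "anger", "fear"]  # hypothetical label set
SIGNAL_DIM = 128   # fused audio/visual/text features (assumed precomputed)
CONTEXT_DIM = 32   # encoded situational context


class EmotionReader(nn.Module):
    """Classifies the speaker's emotional state from signal + context features."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(SIGNAL_DIM + CONTEXT_DIM, 256), nn.ReLU(),
            nn.Linear(256, len(EMOTIONS)),
        )

    def forward(self, signal, context):
        return self.net(torch.cat([signal, context], dim=-1))


# A deliberately simple response policy: which emotional register to express
# in reply. A fuller system would condition this on the same context features
# rather than using a fixed table.
RESPONSE_POLICY = {
    "neutral": "neutral",
    "joy": "joy",
    "sadness": "empathy",
    "anger": "calm",
    "fear": "reassurance",
}


def respond(signal: torch.Tensor, context: torch.Tensor, reader: EmotionReader) -> str:
    logits = reader(signal, context)
    detected = EMOTIONS[int(logits.argmax(dim=-1))]
    return RESPONSE_POLICY[detected]


if __name__ == "__main__":
    reader = EmotionReader()  # untrained here; weights would come from labeled data
    sig = torch.randn(SIGNAL_DIM)
    ctx = torch.randn(CONTEXT_DIM)
    print(respond(sig, ctx, reader))  # e.g. "calm" or "empathy"
```

The hard part, as the answer says, is assembling the labeled data across contexts and cultures; once that exists, the recognize-and-respond machinery itself is conventional supervised training.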