Artificial intelligence (AI) is a revolutionary advance in computer science that is set to become an essential component of all modern software in the coming years and decades. It presents a threat, but also an opportunity: AI can be deployed to augment both defensive and offensive cyber operations. In addition, new means of cyberattack will be invented to exploit weaknesses particular to AI technology. Finally, AI's appetite for large amounts of training data will raise the importance of data itself, recasting how we must think about data protection. Careful governance will be needed to ensure that this powerful technology brings as much shared safety and well-being as possible. Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems. Specific AI applications include expert systems, natural language processing, speech recognition, and machine vision.
Generally speaking, AI refers to computational tools that can substitute for human intelligence in the performance of certain tasks. This technology is now evolving at a rapid pace, comparable to the exponential advances that database technology experienced at the end of the twentieth century. Databases have grown into the core infrastructure that powers enterprise-level software. Likewise, most valuable new software over the coming decades is expected to be driven, at least in part, by artificial intelligence.
In recent decades, databases have evolved enormously to handle a new phenomenon known as "big data." The term refers to the unusually large scale and global scope of modern datasets, most of which are collected by the computer systems that now mediate almost every aspect of daily life. For example, YouTube receives more than 400 hours of video content per minute (Brouwer 2015).
Big data and AI have a special relationship. Recent advances in AI development have come largely from "machine learning." Instead of dictating a static set of instructions for an AI to follow, this technique trains the AI using large datasets. For example, AI chatbots can be trained on datasets of human text conversations collected from messaging apps to learn how to understand what people are saying and to provide appropriate responses (Pandey 2018). One could say that big data is the raw material that fuels AI algorithms and models.
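The idea that a model "learns" behaviour from example data rather than from hand-written rules can be sketched in a few lines. The following toy intent classifier for a chatbot uses Naive Bayes over word counts; the tiny dataset and intent labels are invented purely for illustration, and real systems train on vastly larger corpora.

```python
from collections import Counter, defaultdict
import math

# Invented toy dataset: a handful of chat messages with intent labels.
messages = [
    "hi there", "hello good morning", "hey",
    "what time do you open", "when are you open on sunday",
    "thanks bye", "goodbye", "see you later",
]
intents = [
    "greeting", "greeting", "greeting",
    "hours", "hours",
    "farewell", "farewell", "farewell",
]

def train(messages, labels):
    """The 'learning' step: count word frequencies per intent label."""
    word_counts = defaultdict(Counter)
    label_counts = Counter(labels)
    for msg, label in zip(messages, labels):
        word_counts[label].update(msg.lower().split())
    return word_counts, label_counts

def predict(word_counts, label_counts, message):
    """Naive Bayes with add-one smoothing: pick the most probable label."""
    vocab = {w for counts in word_counts.values() for w in counts}
    total_docs = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label, n_docs in label_counts.items():
        n_words = sum(word_counts[label].values())
        score = math.log(n_docs / total_docs)
        for word in message.lower().split():
            if word in vocab:
                score += math.log(
                    (word_counts[label][word] + 1) / (n_words + len(vocab))
                )
        if score > best_score:
            best_label, best_score = label, score
    return best_label

model = train(messages, intents)
print(predict(*model, "hello"))                        # greeting
print(predict(*model, "what are your opening hours"))  # hours
```

Nothing in the code mentions greetings or opening hours explicitly; the behaviour comes entirely from the training data, which is the essential point of the machine-learning approach described above.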
The main limitation on innovation is no longer the difficulty of recording and storing information, but finding useful insights among the vast quantities of data now being collected. AI can detect patterns in mammoth datasets that lie beyond the reach of human perception. In this way, the adoption of AI technology can make even mundane and seemingly unimportant data valuable. For example, researchers have trained computer models to identify a person's personality traits more accurately than their friends can, based only on the Facebook posts the individual has liked (Wu, Kosinski, and Stillwell 2015).
Hardly a day goes by without a report of a major data breach or a cyberattack costing millions of dollars in damages. Cyber losses are difficult to estimate, but the International Monetary Fund places them in the range of US$100–250 billion annually for the global financial sector (Lagarde 2012). Furthermore, with the ever-growing proliferation of computers, mobile devices, servers, and smart devices, aggregate exposure to threats grows every day. While the business and policy communities are still struggling to grasp the newfound significance of the cyber domain, the application of AI to cybersecurity heralds even greater changes.
One of the key goals of AI is to automate tasks that previously required human intelligence. Cutting the labour an organization must expend on a project, or the time an individual must devote to routine tasks, yields large efficiency gains. Chatbots, for example, can field customer service inquiries, and a medical-assistant AI can help diagnose diseases based on patients' symptoms.
In a simplified model of how AI could be used in cyber defense, logs of recorded activity from servers and network components can be labelled "hostile" or "non-hostile," and an AI system can be trained on this dataset to classify future observations into one of those two classes. The system can then act as an automated sentry, singling out unusual observations from the vast background noise of normal activity.
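A minimal sketch of that "automated sentry" idea follows. It uses the simplest possible statistical baseline, a z-score over request rates, with traffic counts invented for illustration; a production system would instead learn many features from large volumes of labelled logs, as described above.

```python
import math

# Invented baseline: requests per minute observed from one client during
# normal operation. This stands in for the "non-hostile" training data.
baseline = [12, 9, 11, 10, 13, 8, 12, 11, 10, 9]

mean = sum(baseline) / len(baseline)
std = math.sqrt(sum((x - mean) ** 2 for x in baseline) / len(baseline))

def is_anomalous(count, threshold=3.0):
    """Flag observations more than `threshold` standard deviations from the mean."""
    return abs(count - mean) / std > threshold

print(is_anomalous(11))   # ordinary traffic: False
print(is_anomalous(250))  # sudden burst resembling a scan or flood: True
```

Even this crude filter illustrates the division of labour: the machine watches everything and flags the unusual, so that scarce human attention is spent only on the observations that stand out.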
Automated cyber defense of this kind is necessary for dealing with the overwhelming level of activity that must now be monitored. We have passed the point of complexity at which defense and the detection of malicious actors can be managed without artificial intelligence. Going forward, only systems that apply AI to the task will be able to cope with the complexity and speed of the cybersecurity environment.
Continual retraining of such AI models is important because, just as AI is used to prevent attacks, malicious actors of all kinds are also using AI to spot patterns and identify weak points in their potential targets. The state of play is a battlefield in which each side continually probes the other and devises new defenses or new forms of attack, and this battlefield shifts by the minute.
Perhaps the most effective weapon in a hacker's arsenal is "spear phishing": using personal information gathered about a target to send them an individually tailored message. An email that appears to be written by a friend, or a link related to the target's hobbies, stands a good chance of evading suspicion. This method is currently quite labour-intensive, requiring the would-be hacker to manually conduct detailed research on each intended target. However, a chatbot-like AI could be used to automatically generate personalized messages for large numbers of people, using data obtained from their browsing history, emails, and tweets (Brundage et al. 2018, 18). In this way, a hostile actor could use AI to dramatically scale up offensive operations.
AI can also be used to automate the discovery of software security flaws, such as "zero-day vulnerabilities." This can be done with either lawful or criminal intent: software designers can use AI to test their own products for exploitable weaknesses, while hackers can use it to hunt for unknown flaws in operating systems.
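A common automated technique for this kind of flaw discovery is fuzzing: feeding a program large volumes of randomly generated input and watching for crashes. The sketch below runs a toy fuzzer against a deliberately buggy parser; both the parser and its bug are invented for illustration, and modern fuzzers add learned or coverage-guided input generation on top of this basic loop.

```python
import random

def toy_parse(data: str) -> str:
    """A deliberately buggy toy parser: it crashes on any input containing '%%'."""
    if "%%" in data:
        raise ValueError("parser crash")
    return data.upper()

def fuzz(length=6, trials=2000, seed=0):
    """Throw random inputs at the parser and collect any that crash it."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        candidate = "".join(rng.choice("ab%") for _ in range(length))
        try:
            toy_parse(candidate)
        except ValueError:
            crashes.append(candidate)
    return crashes

found = fuzz()
print(f"{len(found)} crashing inputs found")
```

The same loop serves both sides of the fence described above: a vendor runs it against their own product before release, while an attacker runs it against a target looking for an unpatched flaw.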
AI will not only augment existing strategies of attack and defense, but will also open new fronts in the battle for cybersecurity as malicious actors seek ways to exploit the technology's particular weaknesses (ibid., 17). One novel avenue of attack is "data poisoning": because AI uses data in order to learn, malicious actors can tamper with the dataset used to train the AI, bending the system to their ends. "Adversarial examples" could constitute another new form of attack. Analogous to optical illusions, adversarial examples involve modifying an AI's input data in a way that a human would be unlikely to notice, but that is calculated to make the AI misclassify the input in some way. In one widely speculated scenario, a stop sign could be subtly altered so that the AI system driving an autonomous car misidentifies it as a yield sign, with potentially fatal consequences (Geng and Veerapaneni 2018).
AI technology will change the cybersecurity environment in yet another way: its hunger for data is changing what kinds of information constitute a useful asset, turning stores of information that once held little value into attractive targets.
While some cyber attacks aim solely to disrupt, inflict damage or wreak havoc, many intend to capture strategic assets such as intellectual property. Increasingly, aggressors in cyberspace are playing a long-term game, looking to acquire data for purposes yet unknown. The ability of AI systems to make use of even innocuous data is giving rise to the tactic of “data hoovering” — harvesting whatever information one can and storing it for future strategic use, even if that use is not well defined at present.
A recent report from The New York Times illustrates this strategy in action (Sanger et al. 2018). According to the report, the Chinese government was involved in the theft of personal data from more than 500 million customers of the Marriott hotel chain. While the usual concern with data breaches is the potential misuse of financial information, in this case the data could be used to track suspected spies by examining their travel habits, or to identify and target individuals for use as leverage in other matters.
Data and AI combine to connect, integrate, and unlock both tangible and intangible assets; they should not be thought of separately. Ample data has become a key factor in business success, national security and even, as the Cambridge Analytica scandal demonstrated, politics. The Marriott case shows that relatively ordinary information can now confer a strategic advantage in intelligence and national defense, because AI can extract useful insights from seemingly disparate sources of information. Aggregated data of this kind is therefore likely to become an increasingly common target for actors in this domain.
These rapid developments will force a re-evaluation of prevailing cybersecurity strategies. In a hyper-connected system, identifying the weakest link may be harder, but it is also more important, as sensors, machines, and people alike become interconnected providers of data to valuable AI systems.