AI "out of control": a problem that has to face today

While we discuss the impact AI can have on Internet security, one issue tends to be overlooked: AI itself is not secure.

The news of the past few days served as a timely reminder. Google's machine learning framework TensorFlow was recently found to contain serious security vulnerabilities that hackers could exploit to create security threats. Google has confirmed the vulnerabilities and issued fixes.

Although the vulnerabilities were discovered before they could do real harm, the news still left many people uneasy. Machine learning frameworks such as TensorFlow, Torch, and Caffe are practically standard equipment for today's AI developers and researchers, yet these platforms have now been shown to harbor security holes that hackers could exploit.

In a sense, these reports all point to the same problem: as we eagerly pour funding and users into machine learning, we may also be bundling in enormous security risks.

Worse, when it comes to AI security, most of us are still in a state of ignorance: we know almost nothing about how such attacks work or how much damage they can do.

This article hopes to explain these issues; after all, prevention beats cure. It is also a reminder to developers and companies: while big players like Google spare no effort to promote their machine learning platforms, iterating rapidly and releasing piles of free resources to attract users, developers must stay vigilant and never adopt these tools without thinking.

More review mechanisms and more rigorous security services are worth the painstaking effort.

The Devil in the Blind Spot: Security Risks Hidden in Machine Learning Frameworks

It is no joke to say that vulnerabilities in a machine learning platform can send a developer's efforts down the drain. The ransomware outbreak in the first half of this year showed how devastating today's hacking attacks can be, and that ransomware itself exploited Windows vulnerabilities to lock down endpoints in targeted attacks.

After the baptism of ransomware, the information industry has arguably entered an era of "vulnerability hegemony": whoever holds more vulnerabilities holds wider control. As hacking tools proliferate and the barrier to entry falls, capable attackers can use platform vulnerabilities to launch large-scale attacks.

However, even as we pay increasing attention to the security risks facing today's world, we have unwittingly left a blind spot in our field of vision: artificial intelligence.

The basic workflow of most AI development today goes like this: building a deep learning application or system from scratch is extremely laborious and, for most developers, all but impossible, so they turn to a mainstream development framework, such as Google's TensorFlow, the very platform whose security flaws were just exposed.

On such a platform, developers combine its capabilities with open-source algorithms and models to train their own AI applications. This is fast and efficient, and it lets them absorb the most advanced techniques. The logic that "car makers should not have to start by reinventing the wheel" is of course sound, but what if the wheel itself has a problem?
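As a rough illustration, here is a minimal sketch of that workflow in Python, assuming a hypothetical community-shared pretrained model; the file path and layer sizes are placeholders, not references to any specific release:

    import tensorflow as tf

    # Load a pretrained backbone shared by the community (hypothetical path).
    # This file is exactly the kind of third-party artifact discussed above.
    base = tf.keras.models.load_model("community_models/vision_backbone.h5")
    base.trainable = False  # reuse the shared weights as-is

    # Stack a small task-specific head on top and train only that part.
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    # model.fit(train_images, train_labels, epochs=3)  # developer's own data

The convenience is obvious: almost none of the intelligence in this sketch was written by the developer, which is precisely why a tampered model file is so dangerous.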

Over the past two years, large numbers of developers have flocked to machine learning frameworks to train AI, and in that time no security incidents surfaced on these platforms, so security in this field was never taken seriously. Most AI developers probably never imagined a security problem could arise here.

This time, however, the disclosed vulnerabilities show that by exploiting flaws in TensorFlow itself, hackers can easily craft malicious models that control and tamper with any AI application that loads them.

Because a deep learning application in production goes through a complicated training process, the attack point planted by a malicious model is hard to detect in the short term. Yet because everything inside such a system is logically interconnected, a single compromised point can hand over control of the whole. The security risks this creates are clearly worse than those of the Internet era.
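There is no simple cure, but one basic precaution is available today. The following is a minimal defensive sketch, not a TensorFlow feature: before deserializing a model file obtained from a third party, verify its SHA-256 digest against a value published through a trusted channel (the file name and digest below are hypothetical placeholders):

    import hashlib

    # Digest published by the model's author over a trusted channel (placeholder).
    EXPECTED_SHA256 = "0123456789abcdef..."  # hypothetical value

    def is_untampered(path: str, expected: str) -> bool:
        """Stream the file and compare its SHA-256 digest to the expected one."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest() == expected

    if not is_untampered("community_models/vision_backbone.h5", EXPECTED_SHA256):
        raise RuntimeError("Model file failed integrity check; refusing to load it.")

A check like this only proves the file is the one its author published; it cannot prove the author was honest or that the platform itself is sound, which is why the deeper platform-level risk remains.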

Once we understand this, we arrive at an uncomfortable conclusion: the "out of control" AI we have been worrying about may come about not because AI grows too clever and tries to seize power, but because it gets hijacked by unscrupulous hackers.

AI "out of control": a problem that has to face today

One of the biggest differences between artificial intelligence, especially machine learning systems, and the classic model of computation, storage, and interaction is that AI processes information as an integrated, cooperating whole. The famous AlphaGo, for example, does not follow a fixed response pattern for each position; it predicts and reasons about the game on its own. Its intelligence is not a collection of discrete pieces of information but a single, complete "capability."

This is AI's strength, but it is also likely its weakness. Imagine a training model inside AlphaGo being hacked so that, say, the system refuses to play the move that captures an opponent's stone. What would surface is not a miscalculation on one move, but a system that simply cannot win a game at all.

Put bluntly, in AI, pulling one thread moves the whole body. That is what makes the security risks introduced by platform vulnerabilities so frightening.

After all, AlphaGo is a closed system; even if it were attacked, little would be lost. But more and more AIs are being trained for real-world tasks, some of them extremely critical. Once those are compromised at the platform level, the danger becomes incalculable.

Imagine, for example, self-driving cars collectively failing in judgment, IoT systems falling under hacker control, financial-services AI suddenly breaking down, or enterprise AI systems collapsing.

Because AI systems are so tightly and intricately connected, many critical applications sit on top of back-end AI systems, which in turn depend on training models supplied by the platform. If the platform at the bottom falls, a large-scale, chain-reaction crash is inevitable. This may be the AI risk we should worry about most today.

The risk for the AI industry is that once a hacker captures an underlying vulnerability in a machine learning platform, it is like blowing up the foundation of the entire building. This logic had barely been noticed before, but it has now been shown to be real. Most frightening of all, against the unknown vulnerabilities and dangers still out there, AI developers around the world are nearly helpless.

Home and Nation: The Inescapable Strategic Contest over AI

Having recognized the underlying problems that can arise in AI development platforms and how serious they are, we naturally turn to AI security and strategic contention at the national level.

In July this year, the report "Artificial Intelligence and National Security," released by the Belfer Center for Science and International Affairs at the Harvard Kennedy School, pointed out that AI is likely to transform most of the nation's industries in the coming years and become a critical component across them. If AI security is then threatened, the entire U.S. economy could be hit hard.

The same reasoning applies, of course, to China, today's other great AI power competing with the United States. After the TensorFlow vulnerabilities were exposed, we contacted a domestic machine vision startup whose training models all came from TensorFlow community sharing. The conclusion from the conversation: if they were really hit by hackers' malicious models, their products would collapse overnight.

And that is just one startup. TensorFlow users in China reportedly also include large companies such as JD.com, Xiaomi, and ZTE, as well as R&D projects at many research institutes. Going forward, ever more important Chinese AI projects will likely be developed and deployed on this platform. If these projects are exposed to hacker attacks, or even fall under the control of other countries, can we really rest easy on such a development path?

This is no idle worry. After the ransomware outbreak, the hacking tools involved were traced back to cyberattack weapons developed by the U.S. intelligence community. Weapons are built to injure and kill; whether they are used by their maker or stolen and turned loose, the ones who ultimately pay are those caught unprepared.

Given all these possibilities, AI security is no longer child's play. Chinese industry can do at least two things: first, build a professional AI protection industry, upgrading Internet security into AI security; second, gradually reduce dependence on framework platforms run by foreign Internet companies. This is not a populist call to close the door; rather, developers should be given more choices, so that the whole industry naturally aligns with a national AI security strategy.

In short, the security of AI itself has become a link in the chain that developers must care about, that large platforms must take responsibility for, and over which nations will compete. One hopes we will never witness an AI runaway event; after all, the history of the Internet is already full of lessons learned the hard way.
