by Mike Smith
The proliferation of personal computers (PCs) changed business forever. Four decades ago, staff first began bringing devices from home into the office to make their jobs easier. And companies scrambled to find ways to integrate PCs into their computer networks. Security wasn’t an issue back then, because organizations still relied on mainframes for most data processing functions; physical security (that is, securing the devices and various media holding the data) was the primary concern. Only years later did security move beyond the physical realm and become an active element of information technology as we know it today.
The rise of artificial intelligence (AI) presents many of the same issues associated with the introduction of PCs. Employees are finding ways to use AI to accomplish daily work tasks, and organizations are scrambling to develop policies governing its use at work.
AI's benefits are many. It can help businesses automate tasks, improve customer service, and process large amounts of data to gain deeper insights and make better strategic decisions. Industry analysts predict AI will yield major benefits in healthcare, climate, education, engineering, and other fields. But AI is only as good as the data it is given; inaccurate data will still result in bad decisions.
AI also comes with risks. As more and more data is entered into large language models (LLMs), sensitive personal information or intellectual property could potentially be entered as well. Once data is in the LLM, anyone querying the model may be able to access it. And with more than 2 million terabytes of data created daily, our potential attack footprint keeps expanding.
Then, too, there are the risks of AI as a technology itself. It is becoming more proficient at hacking security systems and generating new malicious payloads, and it is making deepfakes and fake news increasingly difficult to distinguish from reality.
Stephen Hawking said, “Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst. We just don’t know. So, we cannot know if we will be infinitely helped by AI, or ignored by it and sidelined, or conceivably destroyed by it.”
For these reasons and more, organizations need to take a proactive approach to AI, developing clear policies and procedures that define where its use is permitted and where it is prohibited. The hope is that AI security precautions will outpace malicious AI. Skynet may be on the horizon, but it isn't here … yet.