
The AI 'Crystal Ball' Predictions for 2024

Published January 22, 2024, in our Security Fraud News & Alerts newsletter.



With artificial intelligence continuing to grab headlines, many wonder what AI-enabled cybercrime will look like in the coming year. From deepfakes to AI-enhanced phishing emails, the risks to online security are greater than ever before.


Below are three of IBM's "crystal ball" predictions for the AI-related cybercrimes we can expect to see in the new year.


GenAI


GenAI, or generative AI, creates text, images, video, and other media based on data input (think ChatGPT). GenAI will help cybercriminals achieve their goals in a fraction of the time it once took, enabling bad actors to deceive businesses, individual users, and the public at large. Cybercriminals will use GenAI phishing tactics, audio, and deepfakes with laser precision. They'll also use GenAI to organize reams of data in minutes, allowing hackers to build profiles and pinpoint the targets ripest for cybercrime.


Lookalike Behavior


Cybercriminals are also using AI to steal and mimic employee identities, helping socially engineered attacks succeed. These attacks manipulate a victim into breaking with business norms and unknowingly helping an attacker gain access to a system. For businesses, the defense is the flip side of the threat: when an employee whose online behavior is normally predictable suddenly acts out of character, it should ring alarm bells. Strict security protocols and strong passwords are key to defending against AI-generated lookalike behavior.
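To make the idea concrete, here is a minimal sketch (in Python, with made-up numbers) of the kind of behavioral check a business might run: it flags a login whose hour of day strays far from an employee's usual pattern. Real monitoring tools weigh many more signals, such as location, device, and typing cadence; this illustration assumes only a hypothetical list of past login hours.

import statistics

def is_anomalous(history_hours, new_hour, threshold=2.0):
    """Return True if new_hour is more than `threshold` standard
    deviations from the employee's usual login hour."""
    mean = statistics.mean(history_hours)
    stdev = statistics.stdev(history_hours)
    if stdev == 0:
        return new_hour != mean
    return abs(new_hour - mean) / stdev > threshold

# An employee who usually logs in between 8 and 10 a.m.
usual = [8, 9, 9, 8, 10, 9, 8, 9]
print(is_anomalous(usual, 9))   # False -- in line with normal behavior
print(is_anomalous(usual, 3))   # True  -- a 3 a.m. login raises a red flag

The point isn't the math but the principle: predictable behavior gives defenders a baseline, and deviations from that baseline deserve a second look.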


"Worm-like" Effect


This effect takes a single malicious attack and amplifies it toward the outcome the cybercriminal wants. It starts with one attack and worms its way into something greater by abusing an AI platform a business relies on. As the use of these promising platforms grows, you can bet cybercriminals will find emerging tactics to exploit them. Although these AI-based worm-like attacks are still on the horizon, those at IBM say it's just a matter of time before they become the norm.


Current and Future Advice


While we don’t know at any given time which of these threats, if any, will strike us, we can be proactive in caring for our own data.


Have a code word or phrase ready. There have already been successful AI deepfake attacks. If you receive a phone call from someone with an urgent issue who is asking for sensitive information or money, have a code word ready. If the other person doesn’t know the code, it could be AI calling you. Alternatively, ask something only you and they are likely to know.


Call right back. If you want to make sure you’re not sending your money off to a scammer, hang up and call the person back on a number you already know.


Question the oddities. If someone’s behavior is out of the norm, check on it. Deviations from regular behavior could signal AI.


Create solid passwords. This is old news, but it bears repeating. Make sure every password is unique to each site you use, is at least eight characters long (longer is better), and mixes uppercase and lowercase letters, numbers, and symbols so it’s difficult to guess or crack. Change them regularly to keep AI on its toes. (For the programmatically inclined, there’s a sketch after these tips.)


Patch and update. To avoid the worms, keep your systems and software up to date, and when a patch is issued, apply it right away!
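As promised above, here is a minimal sketch of how a strong password can be generated programmatically, using Python’s standard secrets module (which draws from a cryptographically secure random source). The 16-character default and the character-class check are our own illustrative choices, not a specific standard.

import secrets
import string

def generate_password(length=16):
    """Generate a random password mixing upper/lowercase letters,
    digits, and symbols, per the advice above."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        password = "".join(secrets.choice(alphabet) for _ in range(length))
        # Keep drawing until every character class is represented.
        if (any(c.islower() for c in password)
                and any(c.isupper() for c in password)
                and any(c.isdigit() for c in password)
                and any(c in string.punctuation for c in password)):
            return password

print(generate_password())  # a different hard-to-guess password every call

Of course, a password manager will do this for you; the sketch just shows there’s no magic to a strong password, only randomness and variety.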


Although a crystal ball isn't exactly a source of facts, those in the know can predict the future of AI-based cybercrime with some certainty. One thing we do know: opportunists that they are, cybercriminals are figuring out how to exploit AI today and will keep doing so tomorrow. So, stay tuned.


Want to schedule a conversation? Please email us at advisor@nadicent.com

