Human Firewalls


The Weakest Link
As technology evolves, the biggest cyber security risk remains the human factor, and the stakes for local governments couldn't be higher. A single attack can cripple constituent services or even bankrupt smaller municipalities. An estimated 88 to 95% of successful cyber-attacks and data breaches involve human error as a root cause. "Human error" is a broad category that covers several distinct actions, not just simple mistakes. When such a high percentage is cited, it usually includes all incidents where a human action inadvertently or negligently contributed to the breach, such as:

• Clicking on a phishing link or providing account information in response to a deceptive email.
• Using default, simple, or reused passwords, or failing to secure them.
• Misconfiguring servers, cloud platforms, or firewalls in ways that expose sensitive data.
• Sending sensitive data to the wrong recipient, losing devices, or not following data-handling procedures.

While hackers rely on technical vulnerabilities, they most often gain access by exploiting human vulnerabilities such as trust, distraction, haste, and lack of training.


AI-Powered Threats
The introduction of sophisticated AI technologies, particularly Large Language Models (LLMs) and autonomous "AI agents," has fundamentally changed the cyber threat landscape. AI is no longer just assisting hackers; it now organizes and executes entire attack campaigns with minimal human intervention. AI agents can scan a target network for vulnerabilities and then generate code to capture credentials, elevate privileges, and extract data, all with little human interaction. They can also monitor networks in real time and continuously change their attack strategy, making them difficult for defenders to counter. AI systems can submit thousands of requests per second against multiple targets simultaneously. AI has all but eliminated the "red flags" people once relied on to detect phishing, such as bad grammar and generic content.
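To see why those old red flags no longer help, consider a minimal Python sketch (purely illustrative; the keyword patterns and sample emails below are invented for this example) of the kind of heuristic users were taught to apply. A classic scam lure trips several heuristics, while a fluent, personalized LLM-written lure trips none:

```python
import re

# Classic phishing "red flags" users were trained to watch for (illustrative list).
RED_FLAGS = [
    r"\bdear customer\b",   # generic, impersonal greeting
    r"\bkindly\b",          # awkward phrasing common in older scam emails
    r"\burgent\b",          # manufactured urgency
]

def red_flag_score(email_text: str) -> int:
    """Count how many classic red-flag patterns appear in an email."""
    text = email_text.lower()
    return sum(bool(re.search(pattern, text)) for pattern in RED_FLAGS)

old_scam = "Dear customer, kindly verify your account urgent!!"
llm_scam = ("Hi Dana, following up on Tuesday's budget call, the revised "
            "vendor invoice is attached for your approval.")

print(red_flag_score(old_scam))  # trips 3 heuristics
print(red_flag_score(llm_scam))  # trips none: 0
```

The second message is exactly the kind of flawless, context-aware text an LLM can generate at scale, which is why training now has to focus on verifying requests out-of-band rather than spotting bad writing.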
LLMs can create near-perfect content with flawless grammar, personalized to match an individual's style and voice. Deepfakes can impersonate employees or vendors during video or phone calls, bypassing authentication based on voice recognition. AI can also create realistic online profiles to gain employees' trust before an attack is launched. AI enables malicious code to adapt and evade detection by generating and rewriting itself with a unique signature for every execution. Malware can use AI to analyze its environment and change its behavior in real time to go undetected by security tools. AI can also scan for zero-day vulnerabilities and dynamically create exploit code for a specific system before a fix is available.

The most effective strategy for minimizing cyber risk is to treat employees as the primary defense layer rather than the weakest link.
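The point about self-rewriting code defeating signature matching can be shown with a harmless Python sketch (no actual malware logic; the two "variants" are just byte strings that differ by a junk comment, standing in for code that mutates on every execution):

```python
import hashlib

def signature(payload: bytes) -> str:
    """Hash-based 'signature' of the kind naive signature matching relies on."""
    return hashlib.sha256(payload).hexdigest()

# Two functionally identical payloads; the second differs only by an
# inserted junk comment, mimicking per-execution self-rewriting.
variant_a = b"print('payload')"
variant_b = b"print('payload')  # x9f3a\n"

sig_db = {signature(variant_a)}        # defender's known-bad signature list
print(signature(variant_b) in sig_db)  # False: the mutated copy slips through
```

Because even a one-byte change produces a completely different hash, every rewritten copy looks brand new to a signature database, which is why behavior-based detection and trained people matter more than static blocklists.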