Cybersecurity

Preventing cyber-attacks on e-voting systems in the US election with WEBINT

Back in the 1960s, when e-voting systems were first introduced, they used optical scanners to count votes. The latest e-voting systems are far more sophisticated and support internet voting, which allows electors to cast their votes from anywhere. While these systems play a significant role in the democratic process, they remain vulnerable to cyber-attacks. This makes a reliable cyber-crime investigation process essential for countering and mitigating electoral fraud.

Recently, cyber-attacks on e-voting systems in US elections have been a recurring subject of debate. For instance, it was widely speculated that the 2016 US presidential election was influenced by cyber-related incidents. Here we’ll discuss how cyber-attacks can compromise an e-voting system and what strategies can effectively mitigate these threats.

Vulnerabilities in e-voting systems
Cyber-attacks may undermine the integrity of the entire electoral process by successfully exploiting the vulnerabilities of an e-voting system. The main targets of these attacks can be vote registration and counting systems, websites that publish the election results, result transmission technologies, and other online services related to the election. Stakeholders of the electoral process may also fall victim to these attacks. Website breaches, ransomware and malware attacks, and Denial-of-Service (DoS) attacks are a few examples of such cyber-attacks on e-voting systems.

While these generic attacks may not require many resources or much sophistication, they can simply flood the available online resources and websites to make them inaccessible.
More sophisticated cyber-attacks, on the other hand, target internal systems, voter databases for identity theft, and other private information. These are well-planned cyber operations controlled by well-funded adversaries such as nation-states. Advanced methods like zero-day exploits are used to execute these attacks, causing severe damage to the whole e-voting system.

Fighting e-voting cyber-threats with Artificial Intelligence
Having briefly reviewed some of the vulnerabilities of an e-voting system and how cyber-criminals can manipulate election outcomes, what role can AI play in mitigating these threats? AI-based solutions have proven exceptionally useful in countering cyber-threats. Using advanced data analysis techniques, AI can detect hidden patterns in data and alert organizations to a potential cyber-attack before it happens.
AI-based cyber-security systems such as web intelligence (WEBINT) employ powerful algorithms to extract meaningful insights from the given data, and they improve over time through continuous learning. Put simply, an AI system can detect and identify threats in the data before any human could, freeing analysts to focus on other important tasks.

Traditional threat-hunting methods rely heavily on past outcomes and cannot learn from new data as AI can. Furthermore, hackers frequently change their methods and tricks. AI-driven techniques are therefore better suited to detecting the volume of threats organizations face today.
An advanced web intelligence (WEBINT) system scans through every detail to detect anomalies and unusual patterns, providing real-time information and detailed reports of cyber incidents in the system. As such, it can be a very effective way to secure an e-voting system.
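To make the anomaly-detection idea concrete, here is a minimal stdlib-Python sketch (not any vendor's actual algorithm) that flags traffic spikes with a simple z-score test; a real WEBINT platform would layer far more sophisticated models on top of such a baseline:

```python
from statistics import mean, stdev

def detect_anomalies(request_counts, threshold=2.5):
    """Flag time windows whose request volume deviates more than
    `threshold` standard deviations from the overall mean, a crude
    stand-in for the statistical models a WEBINT platform would
    apply to election-site traffic."""
    mu = mean(request_counts)
    sigma = stdev(request_counts)
    if sigma == 0:
        return []
    return [i for i, c in enumerate(request_counts)
            if abs(c - mu) / sigma > threshold]

# A sudden traffic spike (e.g. the start of a DoS flood) stands out:
baseline = [100, 98, 103, 97, 101, 99, 102, 100, 96, 104]
print(detect_anomalies(baseline + [900]))  # → [10]
```

In practice the same test would run per endpoint and per time window, with the learned baseline updated continuously rather than computed once.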

Mitigating cyber-attacks on e-voting systems with cyber-crime investigation solutions
Advanced cyber-crime investigation solutions can counter and mitigate cyber-attacks on e-voting systems. These AI crime-prediction tools can track cyber-related incidents and extract meaningful data from malicious cyber activity, and they deliver timely notifications of potential threats. The system uses advanced AI-based algorithms to identify unusual behavior by cyber-criminals.

Connected Car Security in 2026: Top Threats and How Automakers Are Fighting Back

[Figure: Bar chart showing the rise in connected vehicle cyber incidents from 2020 to 2026, highlighting the growing need for automotive cybersecurity]

The modern vehicle is no longer simply a machine that gets you from point A to point B. Today’s cars are rolling data centers — equipped with dozens of electronic control units, over-the-air update capabilities, and constant cloud connectivity. While this transformation has delivered extraordinary convenience and safety features, it has also created a vast new attack surface for cybercriminals. As we move deeper into 2026, connected car security has become one of the most critical priorities for automakers, fleet operators, and regulators worldwide.

A growing body of research confirms the scale of the problem. Industry analysts documented nearly 500 publicly reported automotive cybersecurity incidents across the mobility ecosystem in 2025 alone, a sharp year-over-year increase that shows no signs of slowing. Remote attacks — carried out over cellular, Wi-Fi, and Bluetooth interfaces — now account for the vast majority of these incidents, underscoring how the connected nature of modern vehicles has fundamentally changed the threat landscape.

Why Connected Car Security Is More Urgent Than Ever

Several converging trends are amplifying cybersecurity risk in the automotive sector. First, the number of connected vehicles on the road continues to climb rapidly. Estimates suggest there are now well over 400 million connected cars in active use globally, each one a potential target. Second, the rise of software-defined vehicles (SDVs) means that an increasing share of a car’s functionality — from braking to infotainment — depends on software that can be updated, modified, or compromised remotely.

Third, the financial incentives for attackers have grown. Keyless car theft, which exploits vulnerabilities in CAN bus communication protocols and relay attack vectors, has become a widespread problem in markets across Europe, North America, and Asia. According to law enforcement data, vehicles equipped with keyless entry systems are disproportionately targeted, with some models experiencing theft rates many times higher than their conventional counterparts.

The regulatory environment is also tightening. The UNECE WP.29 regulations — specifically UNR 155, which mandates cybersecurity management systems for all new vehicle types — have raised the compliance bar significantly. OEMs that fail to meet these standards risk being unable to sell vehicles in major markets.

The Most Common Connected Car Attack Vectors

Understanding where the vulnerabilities lie is the first step toward effective protection. The primary attack vectors targeting connected vehicles today include:

Attack Vector | Description | Risk Level
CAN Bus Injection | Attackers send malicious commands through the vehicle’s internal Controller Area Network | Critical
Relay/Keyless Entry Attacks | Signal amplification tricks used to unlock and start vehicles without the physical key | High
Telematics & OTA Exploits | Compromising cloud-connected telematics units or intercepting over-the-air software updates | High
Infotainment Breaches | Exploiting vulnerabilities in entertainment systems to pivot into safety-critical networks | Medium–High
V2X Communication Spoofing | Injecting false data into vehicle-to-everything communication channels | Emerging

Each of these vectors requires a different defensive strategy, which is why the industry has increasingly moved toward unified, platform-level security approaches rather than piecemeal point solutions.
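As a rough illustration of how an in-vehicle defense can spot CAN bus injection, the sketch below checks cyclic message timing: the CAN ID and cycle time are invented for the example, not taken from any real vehicle, and production IDPS products use many more signals than timing alone.

```python
class CanRateMonitor:
    """Toy in-vehicle IDPS check: many CAN IDs are broadcast on a fixed
    cycle, so a frame arriving far ahead of schedule on a known ID is a
    classic injection symptom."""

    def __init__(self, expected_period_ms):
        self.expected = expected_period_ms   # {can_id: nominal period in ms}
        self.last_seen = {}                  # {can_id: last timestamp in ms}

    def observe(self, can_id, timestamp_ms, tolerance=0.5):
        """Return True if this frame arrived suspiciously early."""
        prev = self.last_seen.get(can_id)
        self.last_seen[can_id] = timestamp_ms
        if prev is None or can_id not in self.expected:
            return False                     # no baseline yet / unknown ID
        return (timestamp_ms - prev) < self.expected[can_id] * tolerance

# Hypothetical steering-related ID with a 100 ms broadcast cycle:
mon = CanRateMonitor({0x0F1: 100})
print(mon.observe(0x0F1, 0))      # first frame, no baseline → False
print(mon.observe(0x0F1, 100))    # on schedule → False
print(mon.observe(0x0F1, 110))    # only 10 ms later: injected burst → True
```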

Automotive Cybersecurity Best Practices Driving the Industry Forward

Leading OEMs and Tier 1 suppliers have begun adopting a set of cybersecurity best practices that are rapidly becoming the standard for the industry. These include:

Security-by-design architectures. Rather than bolting on security after the fact, forward-thinking manufacturers are embedding AI-powered cybersecurity directly into the vehicle’s electronic architecture from the earliest design stages. This “shift left” approach catches vulnerabilities before they reach production.

Intrusion detection and prevention systems (IDPS). In-vehicle IDPS solutions monitor network traffic across CAN, Ethernet, and other protocols in real time, detecting and blocking anomalous behavior before it can escalate. Advanced solutions filter noise at the edge, reducing the volume of data that needs to be transmitted to cloud-based security operations centers.

Vehicle Security Operations Centers (VSOCs). Cloud-based VSOCs aggregate data from millions of vehicles to detect fleet-wide attack patterns, correlate threat intelligence, and coordinate incident response. The combination of edge detection and cloud analytics creates a defense-in-depth model that mirrors best practices from enterprise IT security.

Automated DevSecOps. Security testing — including fuzz testing and software bill of materials (SBOM) vulnerability scanning — is being integrated directly into CI/CD pipelines, ensuring that every software release is vetted before deployment.
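The SBOM-scanning step can be sketched as a simple cross-reference. The component names, versions, and advisory feed below are invented fixtures; a real pipeline would parse CycloneDX or SPDX documents and query a feed such as OSV or the NVD.

```python
def scan_sbom(sbom_components, advisories):
    """Cross-reference an SBOM (name -> version) against a vulnerability
    feed (name -> set of affected versions), returning the components
    that should block the release."""
    return [(name, ver) for name, ver in sbom_components.items()
            if ver in advisories.get(name, set())]

# Invented example data:
sbom = {"openssl": "3.0.1", "zlib": "1.3.1", "busybox": "1.36.0"}
feed = {"openssl": {"3.0.0", "3.0.1"}, "busybox": {"1.35.0"}}
print(scan_sbom(sbom, feed))  # → [('openssl', '3.0.1')]
```

In a CI/CD pipeline this check runs on every build, so a newly published advisory fails the next release rather than surfacing months later in the field.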

Regulatory compliance frameworks. Aligning with ISO/SAE 21434 and UNR 155 provides a structured approach to managing cybersecurity risk across the entire vehicle lifecycle, from concept through decommissioning.

How the Industry’s Leaders Are Responding

Among the companies at the forefront of connected car security, PlaxidityX (formerly Argus Cyber Security) stands out for its unified Vehicle Detection and Response (VDR) platform. With over 70 million vehicles protected and more than 80 production projects globally, PlaxidityX offers an architecture-agnostic solution that secures the vehicle from the edge to the cloud. Their approach — combining embedded in-vehicle agents with cloud-based analytics — directly addresses the challenge of vendor sprawl that has plagued many OEM security programs.

The company’s active keyless theft prevention technology is particularly notable: an embedded agent neutralizes CAN injection and relay attacks in milliseconds at the edge, before the engine starts. This capability can be offered as a premium subscription service, transforming cybersecurity from a pure cost center into a revenue-generating feature — a shift that is reshaping how OEMs think about the business of vehicle security.

What Comes Next for Connected Vehicle Protection

Looking ahead, the convergence of AI and automotive cybersecurity promises to accelerate both offensive and defensive capabilities. Machine learning models will become more adept at identifying zero-day threats in real time, while attackers will similarly leverage AI to automate vulnerability discovery. The arms race will favor those manufacturers who invest early in comprehensive, continuously updated security platforms.

For fleet operators, the stakes are equally high. A single compromised vehicle can serve as a gateway to an entire fleet’s data and operational systems. Solutions that combine intelligent edge filtering with centralized SOC monitoring will be essential for managing risk at scale.

The era of the connected car has delivered remarkable innovation. Ensuring that innovation remains safe and secure will require sustained investment, industry collaboration, and a commitment to treating cybersecurity not as an afterthought, but as a foundational element of every vehicle that rolls off the production line.

For further reading on how the UNECE WP.29 regulation is reshaping automotive compliance requirements, consult the United Nations Economic Commission for Europe’s public documentation.

What Is Shadow AI? A Complete Guide for Enterprise Security Teams

[Figure: Bar chart showing shadow AI risk statistics across enterprise organizations in 2025–2026]

Artificial intelligence has moved faster than any technology governance program in history. While organizations debate AI adoption policies, employees have already decided — they are using AI tools today, with or without approval. This phenomenon is known as shadow AI, and it has become one of the most pressing security challenges facing enterprise security leaders in 2025 and beyond.

This guide explains what shadow AI is, why it happens, what risks it introduces, and how organizations can gain visibility and control without blocking the productivity benefits that AI delivers.

What Is Shadow AI?

Shadow AI is the use of artificial intelligence tools, applications, and services by employees without the knowledge, approval, or governance of the organization’s IT or security teams. It ranges from an individual pasting proprietary source code into ChatGPT, to entire departments deploying unapproved AI plugins that access sensitive customer data, to developers using AI coding assistants that capture intellectual property as training data.

The term builds on the older concept of shadow IT — the use of unauthorized software and cloud services — but shadow AI carries a fundamentally higher risk profile. Unlike a rogue SaaS subscription, AI tools actively process, analyze, and in some cases retain enterprise data. The information shared does not simply sit in an unauthorized system; it may train public models, be accessible to third parties, or persist in ways the organization cannot audit or retract.

According to research from industry analysts, more than 80% of employees now use unapproved AI tools in their work. Platforms like Ovalix offer organizations a dedicated shadow AI detection engine that identifies every AI tool in use across the organization — including tools employees have never disclosed.

Why Does Shadow AI Happen?

Shadow AI is not the result of malicious intent. It is almost always a productivity story. Employees encounter an AI tool that dramatically accelerates their work, and they begin using it before — or instead of — waiting for an approval process that may take weeks or months. Three factors consistently drive shadow AI adoption:

  • Approval bottlenecks: AI tools emerge faster than procurement cycles. By the time IT evaluates one tool, five new alternatives have launched.
  • Performance pressure: Employees feel competitive pressure to deliver faster results and AI tools offer an immediate advantage.
  • Policy gaps: As of 2025, more than one-third of organizations have no AI acceptable use policy in place, leaving employees to make their own judgments about what is safe.

The problem is compounded by the fact that most AI tools are browser-based and require no installation, making them invisible to conventional endpoint security tools and network monitoring solutions that were never designed to detect them.

The Security Risks of Shadow AI

Shadow AI creates risk at every layer of the enterprise security stack. The most significant risk categories include:

Data Leakage

The most immediate and measurable risk is data exposure. Employees routinely share sensitive information with public AI tools: customer records, financial forecasts, legal contracts, source code, and personally identifiable information (PII) or protected health information (PHI). In a widely cited incident, engineers at a major semiconductor company pasted proprietary code and internal meeting notes into ChatGPT, creating a data breach that could not be reversed.

Once data is submitted to a public AI model, the organization loses control of it. Depending on the platform’s terms of service, that data may be used to train future versions of the model, accessible to the service provider, or retained indefinitely.

Compliance Failures

Regulations including the EU AI Act, GDPR, HIPAA, and various financial services frameworks impose strict requirements on how organizations handle and process personal data. When employees share regulated data with unauthorized AI tools, the organization may be in violation without its compliance team ever knowing.

This is particularly dangerous in healthcare, where patient data shared with an unapproved AI application may constitute a reportable HIPAA breach. In finance, sharing non-public information with external systems can trigger securities compliance concerns.

Account and Licensing Risk

A specific and often overlooked dimension of shadow AI is the use of personal accounts to access AI platforms. When an employee uses a free personal ChatGPT or Claude account for work, the data they share is governed by consumer terms of service, not enterprise data processing agreements. The organization has no visibility, no audit trail, and no contractual protection.

The Ovalix AI security platform addresses this specifically, with the ability to detect whether employees are using personal accounts versus approved enterprise accounts and enforce policy in real time.

How to Detect and Control Shadow AI

Effective shadow AI management follows a four-stage process:

Stage | What It Involves | Why It Matters
Discover | Identify every AI tool in use across the organization, including browser extensions and personal accounts | You cannot govern what you cannot see
Monitor | Track how employees interact with AI tools in real time, including what data they share | Continuous visibility surfaces risks as they happen
Govern | Establish and enforce an AI acceptable use policy aligned to regulatory requirements | Policy without enforcement is ineffective
Educate | Guide employees toward approved AI tools with real-time feedback, rather than simply blocking access | Blocking drives AI underground; guidance changes behavior

The key shift organizations must make is from reactive blocking to proactive governance. Banning AI tools consistently fails because employees find workarounds. The organizations that successfully manage shadow AI are those that build a governed AI environment where approved tools are accessible, policies are enforced automatically, and employees receive guidance in the moment rather than after an incident.
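The Discover stage can be approximated even with plain web proxy logs. The domain catalog below is a tiny hypothetical sample; a dedicated platform maintains a far larger, continuously updated one.

```python
# Hypothetical AI-service domain list; a production tool would maintain
# a much larger, continuously updated catalog.
AI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}

def discover_shadow_ai(proxy_log, approved):
    """Tally which AI services appear in proxy logs and report, per
    unapproved service, the users who accessed it."""
    hits = {}
    for user, domain in proxy_log:
        if domain in AI_DOMAINS and domain not in approved:
            hits.setdefault(domain, set()).add(user)
    return {d: sorted(users) for d, users in hits.items()}

log = [("alice", "claude.ai"), ("bob", "chat.openai.com"),
       ("alice", "intranet.corp"), ("carol", "claude.ai")]
print(discover_shadow_ai(log, approved={"chat.openai.com"}))
# → {'claude.ai': ['alice', 'carol']}
```

Note the limits of this approach: it sees only domains, not whether a personal or enterprise account was used, which is why the account-level detection described above requires deeper inspection.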

The Business Case for Addressing Shadow AI Now

According to IBM’s Cost of a Data Breach report, shadow AI incidents add an average of $670,000 to the cost of a data breach. Gartner predicts that by 2030, more than 40% of enterprises will experience a security or compliance incident directly linked to unauthorized AI use. The financial and reputational stakes are no longer theoretical.

The good news is that shadow AI is a solvable problem. Organizations that invest in AI visibility and governance infrastructure now will be significantly better positioned to scale AI adoption safely — accelerating innovation rather than blocking it.

For a deeper look at how leading organizations are approaching this challenge, the 

OWASP LLM Top 10 provides a widely referenced framework for understanding AI security risks across the enterprise.

Conclusion

Shadow AI is not a future problem. It is already present in virtually every organization that employs knowledge workers. The employees using unapproved AI tools are not acting carelessly — they are using the most effective tools available to them. The security leader’s task is not to stop them, but to make the safe path the easy path: visible, governed, and compliant by design.

TARA in Automotive Cybersecurity: A Complete Guide to Threat Analysis and Risk Assessment

Threat Analysis and Risk Assessment (TARA) is the analytical foundation of automotive cybersecurity. Required by ISO/SAE 21434, referenced in UN R155/WP.29, and codified in the SAE J3061 guidebook, TARA is the process through which automotive organizations identify what can go wrong with a vehicle’s cybersecurity, how severe the consequences would be, and what needs to be done about it.

Yet TARA is also one of the most consistently underestimated activities in automotive development programs. Organizations that treat it as a documentation exercise — rather than a rigorous analytical process — produce compliance artifacts that fail to accurately characterize their threat landscape, leading to inadequate cybersecurity requirements, missed vulnerabilities, and regulatory exposure.

What Is TARA in the Context of ISO/SAE 21434?

In ISO/SAE 21434, TARA is formally defined in Clause 15 (Threat Analysis and Risk Assessment) and is required at the item level — meaning for every vehicle system or component that is within the cybersecurity scope of the development program. The TARA process produces three primary outputs: a list of threat scenarios (with associated damage scenarios), a risk assessment for each scenario, and cybersecurity goals that define acceptable risk levels.

These cybersecurity goals then drive the entire downstream engineering process: requirements, design constraints, implementation guidance, and test cases. A TARA that misses a significant threat scenario creates a blind spot that propagates through every subsequent engineering activity.

The Six Steps of Automotive TARA

Step | Activity | Key Output
1. Asset Identification | Identify vehicle assets, data, and functions | Asset register with cybersecurity relevance
2. Threat Modeling | Enumerate threats per asset using STRIDE/attack trees | Threat scenario catalog
3. Impact Assessment | Evaluate safety, financial, operational, and privacy impact | Impact rating per scenario (1–4 scale)
4. Attack Feasibility | Assess elapsed time, expertise, equipment, and knowledge | Feasibility rating per threat
5. Risk Determination | Combine impact and feasibility into a risk value | Risk matrix with prioritization
6. Risk Treatment | Define treatment: avoid / reduce / share / accept | Cybersecurity goals and treatment decisions
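ISO/SAE 21434 deliberately leaves the exact risk formula to each organization. The sketch below shows one common pattern, an additive matrix over 1–4 impact and feasibility ratings clamped to a 1–5 risk value; the thresholds are illustrative, not normative.

```python
def risk_value(impact, feasibility):
    """Step 5 (Risk Determination) sketch: combine an impact rating and
    an attack-feasibility rating, both on 1-4 scales, into a 1-5 risk
    value using a simple additive matrix. The mapping is illustrative,
    not prescribed by the standard."""
    assert 1 <= impact <= 4 and 1 <= feasibility <= 4
    score = impact + feasibility          # ranges over 2 .. 8
    return min(5, max(1, score - 3))      # clamp to a 1-5 risk scale

# Severe impact reached by a highly feasible remote attack → top risk:
print(risk_value(impact=4, feasibility=4))   # → 5
# Severe impact but a very hard attack → moderate risk:
print(risk_value(impact=4, feasibility=1))   # → 2
```

The resulting values feed Step 6: scenarios at the top of the scale typically demand Avoid or Reduce treatments, while low values may be candidates for Accept with documented rationale.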

STRIDE and Attack Trees: Core Threat Modeling Methods

ISO/SAE 21434 does not mandate a specific threat modeling methodology, but STRIDE (Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege) and attack trees are the most widely used approaches in automotive TARA practice. STRIDE provides a systematic taxonomy that ensures analysts consider all relevant threat categories across each asset. Attack trees enable complex multi-step attack sequences to be documented and analyzed, which is important for ECU-level threats where an attacker must chain multiple exploits to achieve their goal.
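Attack-tree evaluation can be sketched as a simple recursive propagation of feasibility ratings (higher meaning easier): an OR node is as easy as its easiest child, while an AND node is only as easy as its hardest step. The tree structure and ratings below are invented for illustration.

```python
def tree_feasibility(node):
    """Propagate attack-feasibility ratings up an attack tree:
    OR = max of children (attacker picks the easiest route),
    AND = min of children (attacker must complete every step)."""
    if "rating" in node:                        # leaf attack step
        return node["rating"]
    child = [tree_feasibility(c) for c in node["children"]]
    return max(child) if node["gate"] == "OR" else min(child)

# Hypothetical tree: unlock the vehicle either via a relay attack, or
# by chaining an infotainment exploit with a CAN gateway bypass (AND).
tree = {"gate": "OR", "children": [
    {"rating": 3},                              # relay attack (moderate)
    {"gate": "AND", "children": [
        {"rating": 4},                          # infotainment exploit (easy)
        {"rating": 1},                          # gateway bypass (very hard)
    ]},
]}
print(tree_feasibility(tree))  # → 3
```

Note how the AND branch collapses to its hardest step (1), so the overall feasibility is driven by the standalone relay attack, exactly the insight a TARA analyst needs when prioritizing countermeasures.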

Impact Categories in Automotive TARA

Impact Category | Examples | Severity Scale
Safety (S) | Physical harm to occupants, road users | S0 (no harm) to S3 (life-threatening)
Financial (F) | Warranty costs, recalls, liability | F0 to F3 (based on monetary threshold)
Operational (O) | Vehicle unavailability, function loss | O0 to O3 (based on scope of disruption)
Privacy (P) | Personal data exposure, tracking | P0 to P3 (per GDPR severity categories)

TARA Automation: Why Manual Processes Fail at Scale

Modern vehicles contain 100+ ECUs communicating across multiple network domains. A single vehicle program may require TARA analyses for dozens of items and components, each with hundreds of potential threat scenarios. Performing this work manually in spreadsheets creates consistency problems, traceability gaps, and significant rework burden when designs change.

Automated TARA tools that maintain structured asset-threat-risk linkages, propagate design changes to affected analyses, and generate auditable compliance evidence reduce both cycle time and error rate by an order of magnitude compared to manual methods.

PlaxidityX’s Security AutoDesigner is purpose-built for automotive TARA automation, with structured support for ISO/SAE 21434 Clause 15 processes, attack tree construction, and automatic traceability from threat scenarios to cybersecurity requirements. For a blog-level introduction to TARA in risk management, PlaxidityX’s guide to automating automotive cybersecurity risk management provides practical context.

TARA in the Supply Chain: Sharing and Integrating Analysis

A persistent challenge in automotive TARA is that OEMs and suppliers each perform analyses that must ultimately be consistent with each other. When an OEM’s TARA identifies a threat to a supplier-provided ECU, the supplier’s own TARA must either address that threat or explicitly accept the residual risk at the organizational interface. ISO/SAE 21434 Clause 7 (Distributed Cybersecurity Activities) defines the contractual and technical obligations that govern this handoff.

Further Reading

The SAE J3061 cybersecurity guidebook provides the foundational threat modeling guidance that ISO/SAE 21434 builds upon. For independent coverage of TARA methodology developments, AllTechNews on automotive cybersecurity analysis tracks industry practice and tooling.

Conclusion

TARA is not a one-time compliance activity — it is a living analytical process that must be maintained as vehicle designs evolve, new vulnerabilities are discovered, and threat landscapes shift. Organizations that invest in structured, automated TARA processes produce better security requirements, pass regulatory audits more efficiently, and build a genuine organizational memory of their cybersecurity risk posture across programs and generations of vehicles.
