Best practices for proactive enterprise risk management: “Let the (big) data tell you”

Risk detection data management

Bennett indicates that the initial step for proactive risk assessment is to attain centralized access to all relevant data for specific use cases (cybersecurity, regulatory compliance, etc.). There are various methods for doing so, including linking data with graph technologies, leveraging enterprise search or data federation tools or even deploying comprehensive data fabrics. The data doesn’t necessarily need to be in one location but should be accessible from a single view—much like the quintessential 360-degree customer view. That step is particularly critical in the public sector given the proliferation of silos in areas such as law enforcement.
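
As an illustration of that "single view" idea, here is a minimal sketch of federating queries across silos without physically moving the data. The source names, record layouts, and subject identifier are hypothetical stand-ins for whatever feeds a real data fabric, enterprise search, or federation layer would expose.

```python
# Minimal sketch of a "single view" over siloed sources. Source names and
# record layouts are hypothetical; a real deployment would sit on top of
# data-federation, enterprise-search, or data-fabric tooling instead.

from typing import Dict, List


def query_dmv(subject_id: str) -> List[dict]:
    """Stand-in for a federated query against a DMV silo."""
    return [{"source": "dmv", "subject_id": subject_id, "license_status": "valid"}]


def query_court_records(subject_id: str) -> List[dict]:
    """Stand-in for a federated query against a court-records silo."""
    return [{"source": "courts", "subject_id": subject_id, "open_cases": 1}]


def single_view(subject_id: str) -> Dict[str, List[dict]]:
    """Assemble one consolidated view without moving the underlying data."""
    view: Dict[str, List[dict]] = {}
    for query in (query_dmv, query_court_records):
        for record in query(subject_id):
            view.setdefault(record["source"], []).append(record)
    return view


if __name__ == "__main__":
    print(single_view("subject-123"))
```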

Bennett mentions the effectiveness of the state of North Carolina’s Criminal Justice Law Enforcement Automated Data Services (CJLEADS): “It’s a system that pulls together in one pane of glass all of these different 109 data feeds for law enforcement—everything from sex offender registry to DMV to wildlife licenses. [It’s] every piece of law enforcement data put together.”

By corralling data for specific risk domains with unified access to them, organizations not only increase their understanding of business objectives but also significantly decrease costs. According to Bennett, “You can go online and read the State Controller’s reports about CJLEADS. They estimate $13 million dollars a year in savings from efficiency and time, not having to dig through massive amounts of databases. North Carolina estimates on average about four officers’ lives saved per year in the state.”

Risk network generation

Once the data management component is in place, proactive risk management requires assessing the links between data elements and determining “which of these are most important out of these connections and which would tell us what we should focus on in one place instead of another,” Bennett explains. A variety of techniques can be used, including the following (a brief sketch after the list illustrates them):

♦ Deterministic mapping—Deterministic data mapping is based on relatively simple one-to-one relationships between nodes. There’s a direct correlation between data elements or their metadata. In law enforcement, for example, Bennett notes, “If you know this address is the location of all of these reports and it’s also the location of a particular suspect, that’ll create a link.” Companies can use the same approach when evaluating the importance of IP addresses for cybersecurity or other risk domains.

♦ Probabilistic matching—Probabilistic matching is based on determining the likelihood that nodes are related, typically by using attributes of data and their metadata to identify similarities. In this case, users are able to ascertain “what are the likely links, even if you haven’t seen them before,” Bennett says. This technique is useful for establishing the identities of users interacting with organizations from different devices and networks; probabilistic matching can deliver similar benefits for fraud detection as well.

♦ Algorithms—Whether basic or derived from classic or advanced machine learning models, algorithms can determine relationships between data elements and indicate their importance to specific risk problems. Algorithms can identify “the shortest path of certain parts of a network or which pieces of a network are the biggest influencers, [like] companies laundering money for terrorists,” Bennett says. This approach is valuable for financial services institutions complying with anti-money laundering mandates.
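
Taken together, the three techniques can be sketched in a few lines of code. The following is illustrative only: the records, the 0.7 similarity threshold, and the use of networkx as the graph library are assumptions made for the example, not features of CJLEADS or any particular product.

```python
# Illustrative risk-network construction using the three linking techniques
# described above. Records, thresholds, and scoring are invented for the
# example; networkx is used only as a convenient graph library.

from difflib import SequenceMatcher
import networkx as nx

records = [
    {"id": "r1", "name": "J. Smith",   "address": "12 Oak St"},
    {"id": "r2", "name": "John Smith", "address": "12 Oak St"},
    {"id": "r3", "name": "Jon Smyth",  "address": "98 Elm Ave"},
]

G = nx.Graph()
G.add_nodes_from(r["id"] for r in records)

# 1) Deterministic mapping: exact one-to-one match on a shared attribute.
for a in records:
    for b in records:
        if a["id"] < b["id"] and a["address"] == b["address"]:
            G.add_edge(a["id"], b["id"], kind="deterministic")

# 2) Probabilistic matching: link when name similarity exceeds a threshold.
for a in records:
    for b in records:
        if a["id"] < b["id"] and not G.has_edge(a["id"], b["id"]):
            score = SequenceMatcher(None, a["name"], b["name"]).ratio()
            if score > 0.7:
                G.add_edge(a["id"], b["id"], kind="probabilistic", score=round(score, 2))

# 3) Algorithms: rank influencers and trace the shortest path between nodes.
influencers = nx.degree_centrality(G)
print("most connected node:", max(influencers, key=influencers.get))
print("shortest path r1 -> r3:", nx.shortest_path(G, "r1", "r3"))
```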

Deploying those methods within systems designed to mitigate particular areas of risk is essential for high-stakes use cases such as regulatory compliance or law enforcement. Equally important is the automation of those connections for optimal user experiences “for operational folks, not necessarily data scientists,” Bennett says.

Predictive investigations

Creating the risk networks for specific domains is an integral aspect of proactively detecting potential threats. Mitigating risk hinges on timely, analytics-infused threat investigation. Bennett explains, “Intelligence and investigation is the piece that uses analytics to help make policing more proactive and less reactive.” Predictive analytics produces much of the same effect for managing enterprise risk, especially when alerts notify companies of risk prior to loss (or downtime, regulatory violations, etc.). Whereas artificial intelligence (AI) plays a modest role in generating graph-like connections between data elements in risk networks, machine learning prowess is considerably more influential in “the alerting by telling law enforcement agencies [or private sector organizations] what might be most important” for specific occurrences or threats, Bennett advises.

Predictive analytics affects proactive risk management investigations in three key ways:

♦ Rules-based alerts—This method of alerting relies on basic algorithmic AI in which humans define the conditions under which alerts are generated. For instance, IT teams could set thresholds for network traffic metrics that, when exceeded, trigger alerts to switch data and their processing to alternate systems for high availability (a minimal sketch of this approach follows the list). Whether humans are creating the rules or more sophisticated machine learning is responsible for them, Bennett says, “If you’re an … organization you want the system to be able to, as it’s gathering information, build alerts based on criteria you think are important and surface those alerts when the particular issue or those events are bubbling up to the top and they warrant attention.”

♦ Machine learning alerts—Organizations can reap even greater value from AI by enabling intelligent machine learning algorithms to dictate when and how alerts are issued (a second sketch after this list illustrates letting a model, rather than fixed rules, decide when to alert). According to Bennett, doing so maximizes the merit of predictive analytics because “it’s better to let the data tell you what those [alert] categories should be and surface them in an optimal way. That’s where machine learning and artificial intelligence can provide a lot more benefit.” For instance, organizations attempting to comply with the General Data Protection Regulation can use this method to identify which customers are European Union citizens and which of their data is personally identifiable. Doing so could trigger alerts to implement relevant lifecycle management measures or to deter marketing to customers without their consent.

♦ Alternative analysis—Although prescriptive analytics is generally considered the highest tier of analytics, there are certain cases in which automated action is less appropriate than timely recommendations. “Analytics is best used when it cues up insights for humans to incorporate in the rest of their decision-making process,” Bennett says. “It should never replace the decision-making process.” Predictive or prescriptive analytics is helpful with what Bennett terms “alternative analysis,” in which “you could say here’s a range of alternatives for how you could deploy operational staff given what you’re worried about; here’s what the system is saying are the best ways to achieve that.” In the private sector, for example, threat management systems might detect the presence of ransomware. Advanced analytics could then offer an array of recommendations to lessen the damage.
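
As a deliberately simplified illustration of rules-based alerting, the sketch below checks hand-defined thresholds against incoming metrics. The metric names and limits are hypothetical.

```python
# Minimal sketch of rules-based alerting: humans define the thresholds, and
# the system raises an alert (and could fail over to an alternate system)
# when a monitored network-traffic metric exceeds them.

THRESHOLDS = {
    "requests_per_second": 10_000,  # hypothetical limits
    "error_rate": 0.05,
    "p95_latency_ms": 750,
}


def evaluate_rules(metrics: dict) -> list:
    """Return one alert message per human-defined rule the metrics violate."""
    alerts = []
    for name, limit in THRESHOLDS.items():
        value = metrics.get(name)
        if value is not None and value > limit:
            alerts.append(f"ALERT: {name}={value} exceeds limit {limit}")
    return alerts


# Example reading from a (hypothetical) monitoring feed.
print(evaluate_rules({"requests_per_second": 14_200, "error_rate": 0.02}))
```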

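By contrast, a machine learning alert lets a model learn what “normal” looks like and decide when to raise the flag. The sketch below is illustrative only: the traffic data is synthetic, and scikit-learn’s IsolationForest stands in for whichever anomaly detector suits the domain.

```python
# Sketch of machine-learning-driven alerting: an anomaly detector is trained
# on historical traffic and flags observations that deviate from it, rather
# than relying on hand-written thresholds. All numbers are synthetic.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Historical observations: [requests_per_second, error_rate]
history = np.column_stack([
    rng.normal(5_000, 400, 500),   # typical request volume
    rng.normal(0.01, 0.003, 500),  # typical error rate
])

model = IsolationForest(contamination=0.01, random_state=0).fit(history)

# New observations to score; the second one is an unusual traffic spike.
new_points = np.array([[5_100, 0.011], [14_000, 0.09]])
for point, label in zip(new_points, model.predict(new_points)):
    if label == -1:  # -1 means the model considers the point anomalous
        print(f"ALERT: anomalous traffic pattern {point.tolist()}")
```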