
AI systems hold enormous potential for both good and harm. Developing AI safely, and preventing this new technology from being used to cause harm, will be one of the defining challenges of the 21st century. The ability to “train” AI systems by exposing them to specific data creates openings for abuse if adequate ethical safeguards do not exist. While complex factors drive AI behavior, unfettered access to these tools could allow unscrupulous actors to introduce biases and defects intentionally.

However, protecting AI becomes harder in a global context. As the United States grapples with creating ethical frameworks to guide AI development, authoritarian regimes like China and Russia pursue AI dominance unencumbered by moral concerns about potential downsides. The US must lead an international and comprehensive effort to develop AI in a safe and sustainable manner. The risks are not hypothetical: we have already seen bad actors use AI to impersonate people and commit fraud, and it is easy to imagine AI being used to attack critical infrastructure or facilitate financial crimes.

The recently issued White House Executive Order on AI demonstrates an appropriate level of caution. The order focuses on fostering “safe, secure, and responsible” AI that protects civil liberties, and it ranks guarding American AI technology from malign foreign influence as a top concern. The executive order, alongside legislation like the CHIPS and Science Act, imposes security rules and compliance standards on companies developing sensitive technologies. The Committee on Foreign Investment in the United States (CFIUS) has also gained expanded authority to review foreign investments in technology for national security risks.

So how can the integrity of American AI systems be protected from tampering by hostile powers? While crowdsourcing data from diverse sources often improves AI, leaving systems entirely open invites needless risk. AI platforms should be secured much like financial networks, which tightly restrict access to sensitive data and systems.

AI is changing our technological ecosystem much as the digital revolution did in the early 2000s. As banking, shopping, and other services went digital, regulators had to rethink rules, risks, and how customers’ identities were verified. We can apply those lessons to prepare for the risks and regulations that may be coming for AI.

Fortifying AI systems will likely require the same identity verification, know-your-customer (KYC) checks, activity monitoring, and sanctions blocking that banks use to control risk. AI developers working earnestly to promote beneficial technology can learn much from financial services companies in these areas. Best practices should include the following (a simple sketch of how such controls might look in practice follows the list):

  1. Comprehensive risk assessments identifying vulnerabilities in processes and code.
  2. Written policies and controls that formally address known risks.  
  3. Alignment of business processes with formal risk management policies.
  4. Rigorous auditing procedures to validate control effectiveness.  
  5. Continuous feedback loops to enhance policies and systems over time.
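
To make the analogy concrete, here is a minimal, illustrative sketch of how an AI platform might gate access using bank-style controls: identity verification, sanctions screening, a risk-based decision, and an audit log. All of the names here (Customer, SANCTIONED_PARTIES, is_allowed, log_activity) are hypothetical assumptions for illustration, not any real vendor’s API or a prescribed implementation.

```python
# Illustrative sketch only: KYC-style gating for a hypothetical AI platform.
# Names and thresholds are assumptions, not a real product's interface.

from dataclasses import dataclass
from datetime import datetime, timezone

# Stand-in for a real sanctions/watch list maintained by compliance staff.
SANCTIONED_PARTIES = {"blocked-entity-001"}


@dataclass
class Customer:
    customer_id: str
    identity_verified: bool   # outcome of a KYC / identity verification review
    risk_score: float         # 0.0 (low risk) to 1.0 (high risk), from a risk assessment


def is_allowed(customer: Customer, risk_threshold: float = 0.7) -> bool:
    """Apply sanctions blocking, KYC, and risk-based controls before granting access."""
    if customer.customer_id in SANCTIONED_PARTIES:
        return False                                  # sanctions blocking
    if not customer.identity_verified:
        return False                                  # identity verification / KYC
    return customer.risk_score < risk_threshold       # risk-based access decision


def log_activity(customer: Customer, action: str, allowed: bool) -> None:
    """Record every access decision so audits and feedback loops have data to work with."""
    timestamp = datetime.now(timezone.utc).isoformat()
    print(f"{timestamp} customer={customer.customer_id} action={action} allowed={allowed}")


if __name__ == "__main__":
    applicant = Customer("acme-labs", identity_verified=True, risk_score=0.3)
    decision = is_allowed(applicant)
    log_activity(applicant, "model_fine_tune_request", decision)
```

The point is not the code itself but the pattern: every request passes through explicit, auditable controls, and every decision leaves a record that feeds the auditing and feedback loops described above.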

By taking access security seriously and applying disciplined controls, the tech world can reduce the risk of intrusion or undue influence on AI systems and keep them from being used for nefarious purposes. Wise governance now can help ensure AI remains a blessing rather than a curse for humankind.

The journey is only beginning. How we protect emerging technologies like AI will have long-term effects and will ultimately determine whether the ecosystems built around them remain secure.

 

Posted by Debra Geister

With more than two decades of experience in the banking compliance and anti-money laundering industries, Geister is a recognized leader in the financial crime detection field. She has worked with many of the largest financial institutions as well as technology and data companies, both global and domestic, to help eliminate and reduce money-laundering, fraud, and related financial risks.