Values-Based AI Policies: Balancing Innovation and Responsibility

Written by Matti Neustadt
April 12, 2024

Have you heard about the lawsuits against Google and others over their AI systems? Are you buried in headlines extolling how AI will cause the collapse of civilization, watching half of your colleagues flit around the Internet like kids in a candy store, looking for new “free” tools to write their code, record and summarize their meetings, and otherwise lighten their workload, all while the other half announces the new AI product the company has just released? And now leadership has asked you to develop a corporate AI policy that will simultaneously appease the sales, development, security, and compliance teams. Do you feel overwhelmed by such a task?

This article aims to put you at ease by providing a path for developing an enforceable, growth-oriented AI policy designed to enable a business to use AI while still aligning with core values and legal requirements. The keys to long-term success: (1) start where you are; (2) know your values; (3) never stop learning.

1. Start Where You Are

Know the Business

While this may be surprising coming from a lawyer, you do not necessarily need to start with the various statutes and regulations being adopted for AI. Instead, start with a good understanding of the business – both the existing policies on which you can rely and the business roadmap for the development, use, or acquisition of AI systems. This business knowledge will tell you where AI will have the biggest impact (and therefore the greatest risk), which will help you prioritize policy controls. It will also identify the key personnel who need to be involved in setting policy and practice.

Knowing your business includes not only knowing the business roadmap and operations, but also knowing the technical controls that may already be in place around your information assets. Perhaps you have ISO 27001 controls in place, or have even started implementing ISO 42001 controls for AI management systems. But don’t get too wrapped up in a desire to obtain a certification. While certain high-risk AI systems may soon be legally required to meet mandatory technical controls, the responsible use of AI can be accomplished without a certification. Don’t get me wrong – these standards can be useful for identifying and setting controls in a structured way – but focusing on a certification rather than a sound, scalable policy can quickly deplete limited resources and force concessions that put the business at risk.

2. Know Your Values

Values-based principles – including those set by law and those set by your organization – are key to a scalable compliance program. If you have not defined your organizational values, now is a great time to do so. They are very useful when building risk-based legal compliance programs because they help you prioritize limited resources and gain stakeholder buy-in around values everyone has already agreed upon.

In addition to organizational values, many countries around the world have identified core values that the law will consider when determining whether an organization acted responsibly. You are likely familiar with the concept of legal principles, as it is integral to the GDPR, which directly adopts the fundamental principles of privacy into regulation. In other instances, such as with AI, legal principles may be applied indirectly, for example in determining the legal standard of reasonableness.

There is no single source of legal principles for responsible AI, but the various stated principles overlap significantly, and over time they are likely to stay the same even as statutory laws evolve. Aligning an AI policy to these principles – instead of chasing individual statutory requirements – will put you in a better position both to grow your business beyond a single jurisdiction and to update your operations more easily as the law changes. Aggregated, these principles include:

Lawful

AI systems should be developed and used only with lawfully obtained data and in a manner consistent with applicable legal requirements for the entity. This includes not only statutory laws and government regulations – like the GDPR and the AI Act – but also contracts, which act as a form of private law. If your organization has a contract that limits how you may use certain data, that contract can be considered a legal restriction on using the data to train an AI system – and your AI policy should address that.

Reliable & Secure

No AI can be responsibly used unless it is reliable and secure; otherwise, it will simply cause more problems than it solves. Existing procurement and IT policies are your friend here, but also consider whether they need revision or need to be supplemented with new requirements. For example, it is not uncommon for corporate policies to apply only to purchases above a certain cost threshold. With so many “free” AI solutions that are effectively paid for with your data as training data, such thresholds may need to be revised if you need to protect company trade secrets or copyrights. The same goes for security policies, as you do not want an AI tool that can be used to circumvent corporate security requirements.

Fair & Unbiased

AI systems can never be better than the data they are trained on. Because of this, and because of the societal biases that exist in the world, it is imperative that AI systems are checked against human-centered values such as fundamental freedoms, fairness, the rule of law, equality, and any other corporate values that you have identified. Broadly accepted controls for this include ‘human in the loop’ oversight – where AI is supervised by qualified humans – and first- and third-party feedback loops to report issues in both developed and purchased AI solutions. Remember that human oversight needs to be appropriate for the role: think of the AI system as a perpetual junior trainee at your office – it needs a qualified supervisor to make sure it is doing the job well.
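To make the ‘human in the loop’ and feedback-loop controls concrete, here is a minimal Python sketch of one way such a gate could look. The risk threshold, the AIDecision fields, and the human_approves workflow are illustrative assumptions, not a prescribed or legally sufficient implementation.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical policy threshold: AI outputs scored at or above this risk
# level must be approved by a qualified human before they are acted on.
REVIEW_THRESHOLD = 0.7

feedback_log: list[dict] = []  # simple first-party feedback loop


@dataclass
class AIDecision:
    """One output from an AI system, plus its review metadata."""
    output: str
    risk_score: float              # e.g. impact rating or model uncertainty
    reviewer: Optional[str] = None
    approved: Optional[bool] = None


def human_approves(decision: AIDecision) -> bool:
    """Stand-in for a real review workflow (ticket, UI prompt, etc.)."""
    answer = input(f"Approve AI output {decision.output!r}? [y/n] ")
    return answer.strip().lower() == "y"


def apply_decision(decision: AIDecision, reviewer: str) -> bool:
    """Route high-risk AI outputs to a human supervisor; log every outcome."""
    if decision.risk_score >= REVIEW_THRESHOLD:
        decision.reviewer = reviewer
        decision.approved = human_approves(decision)
    else:
        decision.approved = True   # low risk: allowed through, but still logged
    feedback_log.append({
        "output": decision.output,
        "risk": decision.risk_score,
        "reviewer": decision.reviewer,
        "approved": decision.approved,
    })
    return decision.approved
```

The specific threshold does not matter; the pattern does: the riskier the output, the more qualified the human supervision, and every outcome is captured so the feedback loop has something to learn from.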

Transparent

Those developing, selling, or even just using AI systems should be fully transparent about it. That means disclosing when users are interacting with, or relying on results generated by, AI systems, and providing them with information on how predictions, recommendations, and decisions are made. This is one area where the law is already prescriptive: you must inform people when they are interacting with AI rather than a human, such as with an AI chatbot online.
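As a simple illustration of that disclosure duty, an online chatbot could prepend an explicit AI notice before its first reply reaches the user. This is only a sketch; the wording, names, and placement below are placeholder assumptions, not legally vetted language.

```python
# Illustrative disclosure text; the exact wording would come from legal review.
AI_DISCLOSURE = (
    "You are chatting with an automated AI assistant, not a human. "
    "Replies are generated by a machine learning model."
)


def wrap_chatbot_reply(model_reply: str, first_turn: bool) -> str:
    """Prepend the AI disclosure to the first reply in a conversation."""
    if first_turn:
        return f"{AI_DISCLOSURE}\n\n{model_reply}"
    return model_reply


print(wrap_chatbot_reply("Hello! How can I help you today?", first_turn=True))
```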

Accountable

A lot of fancy corporate speak surrounds the principle of accountability, but all it really means is that there must be a human (or human-run enterprise) to blame if an AI system fails to perform responsibly. All actors in the AI technology chain must be accountable for their actions in accordance with their role. Depending on the business and the risk tolerance of the organization, this can mean anything from diligence in choosing vendors providing AI solutions to implementing a full set of ISO controls – which is why this list starts with knowing your business.

3. Never Stop Learning

Accept that it will be hard

It is impossible to start any journey as an expert. We all must learn to crawl before we walk and walk before we run. And yes, there is an entire industry already claiming to be experts in AI law. But you know what? They aren’t. No one is. I’m not. The law is too nascent in this field. Even those with great transferable skills – legal and practical experts on data protection, IT, intellectual property, administrative procedure, contracts – must accept that this journey will require effort and commitment to learn. Professionals with transferable skills can make assumptions and predictions about how AI law will move forward, but we have no crystal ball telling us the future – we are likely to be wrong at least once. But growth comes from mistakes, and we must continue to accept our discomfort as we grow. Keep a growth mindset and a desire for continuous learning despite the discomfort. Then draft an AI policy that can do the same.

Develop a policy with a growth mindset

A growth mindset is generally described as the belief that your talents and capabilities can and will improve over time. Consider your AI policy as if it were a person, and that person has a growth mindset. Build in learning opportunities, and be ready to change the policy as you learn. Admit that responsibility and legal liability for technology are changing, and that change is uncomfortable and hard – you will make mistakes. Design your policy to be able to recognize mistakes and improve as you learn.

Perhaps you are a bit less worried now? Does this look a bit more manageable? Great! Except now I will firmly replace my legal hat and give you the truly difficult task: maintain evidence. Knowing your business and its values, and acting on that knowledge, can be called into question if you do not keep evidence of it. This is the hard part.

Show your work. Write it all down: the policy, the stakeholders, the governance structure, the review cycle, the values, the principles. Circulate it within the company. Take feedback. Finalize a written policy with a review cadence. Set up regular review meetings – they don’t have to be frequent (I like quarterly to start), but they should happen regularly. Use these reviews to go through all the feedback you received – because you built a human-values-centered AI policy – and update it, even if that means criticizing your past work. This is the discipline of learning and growing. It is what the phrase “upon further consideration” was made for. It is the hard part, but it is the part that pays off the most in the end, whether the end goal is to avoid non-compliance penalties or simply to get the best buy-out price for your start-up in an acquisition. Documentation is where you prove that you live your values, and that those values are worth investing in.
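If it helps to picture what “write it all down” can look like in practice, below is a minimal Python sketch of a structured record for one review cycle, kept alongside the policy itself. The file name, fields, and values are illustrative assumptions, not a required schema.

```python
import json
from datetime import date

# Illustrative evidence record for one policy review cycle; every value here
# is a placeholder to be replaced with your organization's real details.
review_record = {
    "policy_version": "1.2",
    "review_date": date.today().isoformat(),
    "stakeholders": ["legal", "security", "development", "sales", "compliance"],
    "principles_checked": ["lawful", "reliable & secure", "fair & unbiased",
                           "transparent", "accountable"],
    "feedback_items_reviewed": 0,
    "changes_made": [],
    "next_review": "quarterly",
}

# Append to a running log so the history of reviews is itself preserved.
with open("ai_policy_review_log.jsonl", "a") as log_file:
    log_file.write(json.dumps(review_record) + "\n")
```

However the record is kept (wiki page, spreadsheet, or a log like this), the format matters far less than the habit of keeping it.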