The Dutch Central Bank (DNB) released a discussion paper on general principles for the use of AI in finance, which you can read here. On the surface, it seems rather well thought out. But as it is a discussion paper, there’s ample room for discussion…
Start with the decades-old error of apparently haphazard risk classification. When a classification isn’t mutually exclusive and collectively exhaustive, there is little hope for the quality of any subsequent classification built on top of it. This happened, for example, with the operational risk classification in the Basel II framework, left dangling in Basel III and IV for apparent reasons. And now again, we see quite some overlap and double counting in these principles.
The principles have been grouped into the backronym SAFEST: Soundness, Accountability, Fairness, Ethics, Skills, Transparency. Already there, one can debate ad infinitum – and yes, I suggest that those who want to take the DNB paper seriously do so. As backronyms go, hunting for a word, and for some content to attach to that word, just so its first letter fits the acronym, isn’t the sort of rigour one would expect of an agency that de facto acts as lawmaker, executive, and judiciary, on occasion even overriding national and international law.
On to the principles, then, as outlined in the discussion paper. Tellingly, they are formulated as orders rather than principles, it seems, but we’ll leave that aside.
- Ensure general compliance with regulatory obligations regarding AI applications.
- Mitigate financial (and other relevant prudential) risks in the development and use of AI applications.
- Pay special attention to the mitigation of model risk for material AI applications.
- Safeguard and improve the quality of data used by AI applications.
- Be in control of (the correct functioning of) procured and/or outsourced AI applications.
Nothing here needs discussion; all of it has been generic in IT land for decades. Rather, if these need pointing out as principles in this context, the suggestion is that they haven’t been implemented for the trillions of LoC that now define the finance industry. That might be a bigger problem than the young upstart AI deployments. And if there is a reason to reiterate them now rather than at any other time: what exactly is different about AI systems ..?
Hence, the S can be dropped; transferred to some general handbook on how to do decent IT and IT management.
- Assign final accountability for AI applications and the management of associated risks clearly at the board of directors level.
- Integrate accountability in the organisation’s risk management framework.
- Operationalise accountability with regard to external stakeholders.
The same comments apply. Even if the old Three Lines of Defence (that aren’t!) were still something to strive for, that too may go the way of the generic IT handbook. And hopefully, risk management in finance has moved well beyond 3LoD thinking, on to actually effective risk management methodologies.
- Define and operationalise the concept of fairness in relation to your AI applications.
- Review (the outcomes of) AI applications for unintentional bias.
Suddenly, AI specifics come into play… and the ice under these principles/prescriptions gets rather thin. The “concept of fairness”, you say ..? You mean, as per this pointer to inherent (sic) inconclusiveness ..? When fairness to shareholders still undeniably prevails in finance, one had better formulate principles of fairness that would be recognisable to society at large. As it stands, whatever organisations in finance do, they can get away with in the name of fairness – towards whomever they choose to consider. That may be daily praxis, but it shouldn’t be; hence: reformulate. But then, today’s Gutmenschen hobbies will creep in. Now what?
Also: unintentional bias? Good. Intentional bias is in, then – by the same arguments, leading to the same conclusion. The idea that a human in the loop would do any good, or have the slightest modicum of effectiveness (towards what?), has been debunked at such length that it seems some have missed the past thirty years of discussion on this.
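To make the “review outcomes for bias” principle concrete: a minimal sketch of one common check, the “four-fifths” disparate-impact ratio over model outcomes. The attribute name, the data, and the 0.8 threshold are illustrative assumptions on my part, not anything the DNB paper specifies – and note this catches only one narrow, statistical notion of (un)fairness.

```python
# Minimal sketch: flagging possible unintentional bias in AI-driven decisions
# via the "four-fifths" disparate-impact ratio. All names and data here are
# illustrative assumptions, not taken from the DNB discussion paper.

def approval_rate(outcomes, attr, group):
    """Share of positive outcomes for applicants in the given group."""
    members = [o for o in outcomes if o[attr] == group]
    return sum(o["approved"] for o in members) / len(members)

def disparate_impact(outcomes, attr, group_a, group_b):
    """Ratio of approval rates; values below ~0.8 are commonly flagged."""
    return approval_rate(outcomes, attr, group_a) / approval_rate(outcomes, attr, group_b)

outcomes = [
    {"sex": "f", "approved": 1}, {"sex": "f", "approved": 0},
    {"sex": "m", "approved": 1}, {"sex": "m", "approved": 1},
]
ratio = disparate_impact(outcomes, "sex", "f", "m")
print(round(ratio, 2))  # 0.5 -> would be flagged under the four-fifths rule
```

A ratio of 0.5 would be flagged; but note that such a check says nothing about intentional bias, nor about which fairness concept should prevail – which is exactly the unaddressed discussion above.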
- Specify objectives, standards, and requirements in an ethical code, to guide the adoption and application of AI.
- Align the (outcome of) AI applications with your organisation’s legal obligations, values and principles.
This is a nifty bit of tongue-in-cheek slipped in. Ethics in finance ...! Given the smirk that society rightly gives the “bankers’ oath”, such an ethical code will surely miss the mark. As it appears among the 17 principles, it is almost an ‘if all else fails, fall back on some oath’ Hail Mary. When one drafts principles, they should be that – not prescriptions plus an escape clause into vagueness.
And the alignment part is easy, once one sees that “anything that’s not explicitly illegal” will be pursued, in line with the values and principles of bonus maximisation through short-term profit maximisation. No, that is still a fact. If you don’t see that, refer to the previous.
- Ensure that senior management has a suitable understanding of AI (in relation to their roles and responsibilities).
- Train risk management and compliance personnel in AI.
- Develop awareness and understanding of AI within your organisation.
Again, nothing new. Again, as with the S and the A, this hasn’t happened anywhere in the finance industry yet, so why not fix that first and only then see whether AI changes anything. Quite surely, these principles will then go the way of the generic handbook.
- Be transparent about your policy and decisions regarding the adoption and use of AI internally.
- Advance traceability and explainability of AI driven decisions and model outcomes.
Ah, finally, things get interesting. Note the aspirational explanation in the discussion paper. But it leaves everything one would actually want to discuss unaddressed, empty of proposals or lines of thought.
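What would be worth discussing, for instance, is the mechanics of traceability: recording enough about each AI-driven decision (inputs, model version, outcome) that it can be reproduced and explained afterwards. A minimal sketch of such an audit record follows; all names and fields are illustrative assumptions of mine, not proposals from the DNB paper.

```python
# Minimal sketch of decision traceability: record, for each AI-driven
# decision, the inputs, model version, and output, plus a hash tying the
# record to its exact inputs. Names and fields are illustrative assumptions.
import datetime
import hashlib
import json

def trace_decision(model_version, inputs, output, log):
    """Append an audit record; the hash fingerprints the exact inputs used."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
    }
    log.append(record)
    return record

audit_log = []
rec = trace_decision(
    "credit-scorer-1.4.2",                    # hypothetical model identifier
    {"income": 42000, "tenure_years": 3},     # hypothetical decision inputs
    {"approved": False, "score": 0.41},       # hypothetical model output
    audit_log,
)
print(rec["model_version"], rec["input_hash"][:8])
```

Traceability of this mechanical sort is the easy part; explainability of the model outcome itself is the hard part – and precisely the part the paper leaves without proposals.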
Which is why the discussion paper is a 0.3 version at best. Almost all of it is a summary of how things should have been done for decades [remember, banks were early adopters of ‘computers’] but apparently (and as known from first-hand practice) weren’t and aren’t, with only a minimal tail of actual discussion items. If this were meant just to launch the backronym, a one-pager would have done. If it were meant to launch principles for AI deployment in finance, the small useful part should be clipped out and folded into a much broader discussion.
Oh well. My 2¢ – what’s yours ..?
As if anyone would need it – society has run on the implicit contrat social for centuries and longer. If one needs a specific oath, something terribly specific must be at stake, with specific implications as well; e.g., the military oath. I have pledged the officers’ oath myself, and would be severely insulted by the suggestion that I had laid it aside like a coat upon leaving active service to the constitution, or that it would not apply to some cowardly job like one in finance.