Revealed: bias found in AI system used to detect UK benefits fraud

An artificial intelligence system used by the UK government to detect welfare fraud is showing bias according to people’s age, disability, marital status and nationality, the Guardian can reveal.

An internal assessment of a machine-learning programme used to vet thousands of claims for universal credit payments across England found it incorrectly selected people from some groups more than others when recommending whom to investigate for possible fraud.

The admission was made in documents released under the Freedom of Information Act by the Department for Work and Pensions (DWP). The “statistically significant outcome disparity” emerged in a “fairness analysis” of the automated system for universal credit advances carried out in February this year.

The emergence of the bias comes after the DWP this summer claimed the AI system “does not present any immediate concerns of discrimination, unfair treatment or detrimental impact on customers”.

This assurance came in part because the final decision on whether a person gets a welfare payment is still made by a human, and officials believe the continued use of the system – which is attempting to help cut an estimated £8bn a year lost in fraud and error – is “reasonable and proportionate”.

But no fairness analysis has yet been undertaken in respect of potential bias centring on race, sex, sexual orientation and religion, or pregnancy, maternity and gender reassignment status, the disclosures reveal.

Campaigners responded by accusing the government of a “hurt first, fix later” policy and called on ministers to be more open about which groups were likely to be wrongly suspected by the algorithm of trying to cheat the system.

“It is clear that in the vast majority of cases the DWP did not assess whether their automated processes risked unfairly targeting marginalised groups,” said Caroline Selman, senior research fellow at the Public Law Project, which first obtained the analysis.

“DWP must put an end to this ‘hurt first, fix later’ approach and stop rolling out tools when it is not able to properly understand the risk of harm they represent.”

The acknowledgement of disparities in how the automated system assesses fraud risks is also likely to increase scrutiny of the rapidly expanding government use of AI systems and fuel calls for greater transparency.

By one independent count, there are at least 55 automated tools being used by public authorities in the UK potentially affecting decisions about millions of people, although the government’s own register includes only nine.

More news at: https://uk.yahoo.com/news/revealed-bias-found-ai-system-050047700.html
