Love it or hate it, artificial intelligence is here to stay, and it’s playing an increasingly important role in housing. The Dentons team - Michael Park (Partner) and Antonia Hudson (Senior Associate) - provide us with a legal update on the use of AI for the Australian community housing sector, and AI governance tips for leaders of all organisations.
Artificial intelligence (AI) is a valuable tool for Community Housing Providers (CHPs) to deliver high-quality, efficient services for the benefit of their tenants. However, it is critical that CHPs understand the complex legal and regulatory risks associated with AI and implement safeguards to protect their tenants, employees and organisation. This will help ensure CHPs use this new technology in a responsible manner aligned with community expectations.
Potential of AI
AI is already being used by CHPs to increase efficiency, drive innovation and assist with the digital transformation of key housing services. Harnessing the analytical power of AI can streamline site selection and tenant allocation, provide predictive maintenance and support the management of community housing.
AI can also enhance a CHP’s ability to engage with individuals based on their specific needs. The processing power of AI has unlocked personalisation at unprecedented levels. The technology can be leveraged to make services accessible to people with disabilities or from different ethnic backgrounds; for example, by converting text to pictures or translating content into other languages.
However, with exciting new possibilities also come new challenges and risks from both a legal and operational perspective.
State of AI regulation
One challenge posed is understanding how this revolutionary technology interacts with the existing legal landscape in Australia.
Australia does not have overarching legislation targeted at governing AI. Instead, there is a patchwork of existing legislation that may apply to AI use depending on the type of potential harms that may arise. Privacy laws, anti-discrimination laws, consumer protection laws, work health and safety laws, among others, may all apply to the deployment of AI technology.
Australia’s voluntary AI Ethics Principles were published in 2019 to assist organisations in developing strategies for the ethical adoption of AI. Concurrently, the Australian AI regulatory landscape is changing to address the harms that unbridled use and development can cause, while also balancing innovation. In the past year, there have been a number of key developments in regulating the use of AI in high-risk scenarios.
Firstly, a key change to the Privacy Act 1988 (Cth) enacted last year will require organisations from late 2026 to include information in their privacy policies about decisions made using personal information with AI that could significantly affect an individual’s rights or interests. CHPs using or planning to use personal information in AI systems to make decisions—for example, that determine a person’s eligibility for housing, subsidies or other benefits, allocate resources, triage maintenance and repair work or determine rental payment increases—should take steps now to ensure they’re ready to comply with this obligation when it comes into force next year.
In September 2024, Australia published the Voluntary AI Safety Standard and proposed guardrails for the mandatory use of AI in high-risk settings. As the name suggests, compliance with the Voluntary AI Safety Standard is not mandatory. However, the guardrails do serve as useful guidance to organisations seeking to pursue responsible adoption of AI.
If the mandatory guardrails (which closely align with the voluntary standard) are made into law, they will regulate high-risk settings. The proposal includes potential indicators of high-risk use cases, including AI systems used to determine access to essential public services. This is likely to include access to community housing.
ACNC guidance on governance duties and AI
CHPs will ordinarily be registered as charities with the Australian Charities and Not-for-profits Commission (ACNC). Directors of CHPs may have responsibility for ensuring compliance and managing AI risk arising from their legal duties under ACNC Governance Standard 5 and under the general law. Governance Standard 5 is the key statutory obligation setting out the governance duties effectively imposed on the ‘responsible persons’ of charities; a charitable CHP’s responsible persons will include its directors, committee members or governing body members. A breach of these duties can result in the personal liability of a director.
The ACNC’s Guidance on Charities and Artificial Intelligence (which provides general guidance on charities’ use of AI and which all charitable CHPs should read) points out three duties in ACNC Governance Standard 5 that charitable CHPs should consider when using AI:
- The duty to act with reasonable care and diligence, including the duty to ensure privacy and confidentiality, and maintain appropriate cyber security safeguards;
- The duty to ensure that a charity’s financial affairs are managed responsibly – which may encourage the effective use of AI, to improve efficiency; and
- The duty to act honestly and fairly in the best interests of a charity and to advance its charitable purposes – which may encourage charitable CHPs to use AI in order to further their charitable purposes.
So, what to do?
An understanding of the AI technology used by the organisation, including its limitations, is key for directors to be able to interrogate proposals and assess whether there is any foreseeable harm to the organisation. The complex nature of AI and rapidly changing technological developments make this increasingly difficult for directors to achieve. With AI tools becoming increasingly accessible, affordable or freely available, even being aware of how members of an organisation are using AI in their work is a challenge.
Responsible use of AI that does not infringe Australian laws requires an understanding of the risks arising from AI use and the limitations of the technology, together with steps to ensure the responsible implementation of AI tools.
Risks
While each AI technology and its particular use will give rise to different risks of harm, there are a few risks that are important to keep in mind when considering an AI project.
- Algorithmic bias
AI systems have the potential to result in unfair and biased outcomes and, in some cases, unlawful discrimination.
The risk of bias is of particular relevance to CHPs engaging with marginalised groups. Biased results can occur in a number of ways, including through the design of the model and the data it is trained on. If an AI model has been trained on historical data sets, it can reproduce and exacerbate bias inherent in previous decision-making. For example, if historical decisions about housing favoured some groups over others, the AI may learn this pattern and perpetuate this bias. Alternatively, if certain groups are underrepresented in the training data, then the predictive capabilities of the technology may be less accurate for those groups compared to others, as the short sketch below illustrates.
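To make the underrepresentation point concrete, here is a minimal Python sketch of the kind of per-group disparity check an organisation might run before relying on a model. Everything in it is hypothetical: the group labels, records and outcomes are invented for illustration, and real bias auditing requires far more rigorous statistical and legal analysis.

```python
# A hypothetical, minimal sketch of a per-group disparity check.
# All names and records below are invented for illustration only.
from collections import defaultdict

# (group, model_prediction, actual_outcome) - 1 = approved/eligible, 0 = not
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 1, 0),
    ("group_b", 0, 1), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 0, 1),
]

stats = defaultdict(lambda: {"correct": 0, "approved": 0, "total": 0})
for group, predicted, actual in records:
    stats[group]["total"] += 1
    stats[group]["correct"] += int(predicted == actual)
    stats[group]["approved"] += predicted

for group, s in sorted(stats.items()):
    print(f"{group}: accuracy={s['correct'] / s['total']:.0%}, "
          f"approval rate={s['approved'] / s['total']:.0%}")

# Here group_a fares far better on both measures than group_b - a signal to
# examine training data coverage and historical bias before relying on the
# model for decisions that affect tenants.
```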
- Lack of transparency and explainability
AI decision-making is often opaque, and it is not always possible to determine what data was used to create the output or how a decision was made.
In some cases, it is not possible to determine how an outcome was produced or what reasoning was relied on. This makes it difficult to have confidence in the accuracy or reliability of the results. Individuals who are the subject of AI decisions that cannot be justified may not have adequate grounds to challenge the outcome.
- Hallucinations
Although the processing capabilities of AI can seem superhuman, they are not without fault.
AI tools, particularly large language models, have been known to ‘hallucinate’. An AI tool may produce outputs that are inaccurate or manifestly wrong. Caution should be taken when relying on information provided by AI, particularly in relation to essential services, to ensure users are not misled.
- Privacy and confidentiality
In some respects, AI tools available on the market are like any other third-party software and should be subjected to the same thorough due diligence to ensure any confidential information is kept secure and personal information is used in accordance with applicable privacy laws.
However, AI poses a unique challenge to protecting privacy and confidentiality. Unlike traditional systems where data may be returned or destroyed, once an AI model has ingested information, it may not be able to unlearn it, and control over that information may be permanently lost.
The legal terms of many popular generative AI tools warn against sharing commercially sensitive or private information because the model learns from the inputs provided. If inputs become part of the knowledge base of a generative AI tool, there is a risk that information can be disclosed in an output as well.
It is important for CHPs to ensure confidential, personal and sensitive information is not used with AI technology without appropriate contractual and technical protections in place.
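By way of illustration only, the Python sketch below shows one simple technical safeguard of the kind referred to above: redacting obvious identifiers before any text is passed to an external generative AI tool. The patterns and the redact() helper are hypothetical and deliberately simplified; reliable de-identification requires purpose-built tooling, and technical measures complement rather than replace contractual protections and vendor due diligence.

```python
# A hypothetical, simplified sketch: strip obvious personal identifiers from
# text before it leaves the organisation's systems. Real PII detection is much
# harder than regex matching (e.g. names below are not caught).
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b(?:\+?61|0)[0-9 ]{8,12}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with labelled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

tenant_note = "Tenant Jane Citizen (jane@example.com, 0412 345 678) reports a leak."
print(redact(tenant_note))
# Only the redacted text would be passed to the external AI tool; the original
# record stays within the organisation's own systems.
```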
Managing AI risk
An effective governance framework, clear accountability and strong leadership are essential to managing the risks arising from AI. Management and directors play a vital role in fostering a culture of compliance and awareness.
The importance of good governance is highlighted by the fact that it is the first mandatory guardrail proposed by the Australian Federal Government—organisations should establish, implement and publish an accountability process including governance, internal capability and a strategy for regulatory compliance.
An effective governance framework includes the following core components:
- Policies, practices, processes and strategies
Boards and leadership should set the organisation’s vision, values, strategies and risk tolerance for AI, and these should be documented and distributed within the organisation. To support these overarching goals and principles, the organisation must have documents, policies and procedures in place (e.g. privacy policies, AI use policies, etc.).
- People, skills, education and culture
Responsible use of AI relies on the engagement, buy-in and responsibility of every person in an organisation. Training and education are essential to ensuring employees, leadership and the board have the requisite knowledge and skills regarding the implementation and use of AI, and to addressing any knowledge gaps.
- Roles, responsibilities and governance structures
Clear roles, defined responsibilities for employees, management and the board of directors, and accountability for key responsibilities are needed to achieve effective oversight.
- Monitoring, reporting and evaluation
It is essential to ensure policies, procedures and accountability mechanisms are operationalised. It is not enough to have well-documented plans that sit in a drawer gathering dust. Instead, organisations must ensure, through monitoring, reporting and evaluation, that AI used within their organisation is properly implemented and adapted to changing laws and risk environments.
5 questions leaders of CHPs can ask themselves now
- How is AI currently being used in your organisation? Has the AI technology been vetted prior to implementation?
- Who is accountable for managing and monitoring AI risk, and do they have the requisite skills? What governance structures do you have in place in your organisation?
- Based on your organisation’s use of AI, what are your key AI risks—regulatory or otherwise?
- Are appropriate safeguards for vulnerable groups in place?
- What AI policies and procedures do you have? How are these communicated and enforced?
Wherever your organisation is on its AI journey - whether you are new to these tools or well-versed in their capabilities - managing AI risk requires an ongoing effort to keep abreast of technological change and an evolving regulatory landscape, and to ensure your organisation is using AI responsibly.