Working Paper II: Risk and design


How to innovate with AI and technology in a risk-conscious way

Law school makes you fairly conscious of risk-based thinking, always encouraging you to give balanced consideration to how different paths may create different impacts for the people you are advising. Of all the skills to bring over into the innovation field, I feel this has been one of the better ones to come with me.

Last year, in Working Paper I, OpenUp introduced our musings on how human-centred thinking and social-impact-centred thinking might be combined to inform considered design for positive social change. Continuing that line of thinking, OpenUp has also explored how to embed within our design processes an appreciation of the potential negative risks and externalities that might arise from how we innovate.

Risk-based thinking is often used in organisational design. You can see it in action, for instance, in OpenUp’s consideration of funding and financing options for civic tech entities. But it is not just an important idea for constraining business practice; it matters for shaping design practice as well. In a context where we are actively attempting to influence change, how do we help ensure this change is good?

Some of my favourite work on thinking about AI and impact has been done by lawyers (we’ll do another blog on cognitive bias later in our series). In 2017, many moons before the hype cycle and marketing teams began yelling about ‘existential AI threats’, work in the regulation of AI was already grappling with how to categorise and weigh risks and harms. Petit, for instance, provided the following “Typology and Examples of Externalities” for AI and robotics:

The research shown above sought to categorise risks in the context of assigning liability (a favourite task of law). For our purposes, we are far more concerned with using this thinking to practically prevent risks within an innovation environment with multiple moving parts. In AI research and policy, “ethics” is the mechanism through which normative concerns are sought to be driven into products, and researchers in this space have been calling for different kinds of formal risk assessments to be adopted before a product is developed and after it is deployed, to ensure ‘good’ AI in both design and outcome. You may have read, for instance, of privacy risk assessments, which laws like the Protection of Personal Information Act or the European Union’s General Data Protection Regulation oblige those processing personal information to perform.

But sectoral risk assessments like these are largely about compliance, and are quite narrow. Building on the research of Crawford and Calo, OpenUp agrees that innovation should seek to embed values within the developing organisation and project, and can also benefit from conducting thought experiments to plan scenarios (in Working Paper I you’ll see how we put some “problem statements” in that vein into practice to assist in our problem identification). However, we still wanted to find a way to embed a consideration of risk, across varied kinds of risk assessment, that can embrace the social, economic and political complexity in which we actually launch our innovations:

“A social-systems analysis needs to draw on philosophy, law, sociology, anthropology and science-and-technology studies, among other disciplines. It must also turn to studies of how social, political and cultural values affect and are affected by technological change and scientific research. Only by asking broader questions about the impacts of AI can we generate a more holistic and integrated understanding than that obtained by analysing aspects of AI in silos such as computer science or criminology”.

So how might we implement this world of risk in practice? One of the primary ways is to adapt your existing tools to account for these concerns. Below is OpenUp’s project diagram:

OpenUp's Design Process, (c) 2023

In green, you will see we have highlighted the different kinds of risks that can be focused on at specific design points. Impacts include both the positive and negative externalities of what you do, while risks refer only to the negative ones. In unpacking risks, some are of course specific to projects; many non-governmental organisations, for instance, will be familiar with outlining these project-specific risks (and their mitigation plans) in proposals, because of the question structure of many funding applications.

But particularly when we work with beneficiaries (as users), we have to think about how our project implementation might incur risks, and how the introduction of our innovation itself might also introduce risk. For example, how might implementing our project present risks to users simply through participation: would attending our user workshops be to their detriment in any way? Looking at our innovation intervention, how might using our product present risks to a user's privacy? This is particularly pertinent when introducing adapted technologies built on AI-as-a-service or software-as-a-service.
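To make that last concern concrete, here is a minimal, purely illustrative sketch (in Python) of one privacy-preserving step a team might take before handing user text to a hypothetical third-party AI service: redacting obvious personal identifiers first. The patterns, names and example text are our own assumptions for this illustration, not a description of any OpenUp product or of how any particular vendor works.

```python
import re

# Illustrative only: patterns for a few obvious personal identifiers.
# A real redaction step would need far more care (names, addresses, context).
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s-]{7,}\d"),
    "sa_id": re.compile(r"\b\d{13}\b"),  # South African ID numbers are 13 digits
}


def redact(text: str) -> str:
    """Replace matched identifiers with labelled placeholders before any external call."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} removed]", text)
    return text


if __name__ == "__main__":
    note = "Contact Thandi on 082 555 1234 or thandi@example.org about the workshop."
    print(redact(note))
    # Contact Thandi on [phone removed] or [email removed] about the workshop.
```

Even a small step like this makes the risk discussion actionable: the question "how might our product threaten a user's privacy?" becomes a concrete decision about what data ever leaves our hands.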

Turning to context, in Working Paper I we emphasised that design for social change should not just be cognisant of individual users, but also of the communities in which these users exist and exert their agency. When we work on context, an emphasis on community social dynamics, and on how these communities fit within broader political, social and economic structures, not only helps to create better design but also needs to be reflected on when determining risks. For example, how might working with a community on political participation become fractious in a context where levels of community-government trust are very low?

Importantly too, as a final step through our design process, reflecting on impact (and risk) at our building and evaluation points helps us to identify risks and then introduce strategies to mitigate them in the next phase of development. In short, iterative development provides specific points at which actionable steps can be taken to mitigate risks, promoting a safer kind of innovation for working with vulnerable communities.
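As a sketch of what this can look like in practice, below is a minimal, hypothetical risk register in Python that tags each identified risk to a point in the design process and records the mitigations agreed at each evaluation, ready for the next iteration. The phase names, fields and example entries are assumptions made for illustration; they are not OpenUp’s actual internal tooling.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical phase names, loosely mirroring the design points discussed above.
PHASES = ("context", "implementation", "innovation")


@dataclass
class Risk:
    description: str
    phase: str                                   # where in the design process the risk arises
    mitigations: List[str] = field(default_factory=list)
    closed: bool = False


@dataclass
class RiskRegister:
    risks: List[Risk] = field(default_factory=list)

    def add(self, description: str, phase: str) -> Risk:
        if phase not in PHASES:
            raise ValueError(f"unknown phase: {phase}")
        risk = Risk(description, phase)
        self.risks.append(risk)
        return risk

    def open_risks(self, phase: Optional[str] = None) -> List[Risk]:
        """Risks still needing attention, optionally filtered by design phase."""
        return [r for r in self.risks
                if not r.closed and (phase is None or r.phase == phase)]

    def review(self, risk: Risk, mitigation: str, closed: bool = False) -> None:
        """Record a mitigation at an evaluation point, feeding the next iteration."""
        risk.mitigations.append(mitigation)
        risk.closed = closed


if __name__ == "__main__":
    register = RiskRegister()
    workshop = register.add("Workshop attendance could expose participants", "implementation")
    privacy = register.add("Third-party AI service may retain user data", "innovation")

    # At the end of an iteration, review open risks and plan mitigations for the next cycle.
    register.review(workshop, "Offer anonymous and off-the-record participation options")
    register.review(privacy, "Redact personal identifiers before any external processing")
    for risk in register.open_risks():
        print(f"[{risk.phase}] {risk.description} -> next: {risk.mitigations[-1]}")
```

Even a structure this simple makes the reflection points explicit: every evaluation leaves behind a record of what was decided and what still needs attention in the next phase.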

So, what do these learnings mean for OpenUp in practice? One example of how these methods have changed our approach is our recent refresh of our Codebridge Youth programme. As we restructure our approach to outreach for enhancing youth political participation through data, digital literacy and innovation, we are strongly centring the particular risks that arise when engaging communities of young people. How do you encourage effective use of social media whilst guarding against cyberbullying? How do you encourage digital participation whilst teaching privacy-preserving practices? We believe that centring these risks is an important way of preventing the extractive practices that often mar community engagement.

OpenUp appreciates the innate difficulty of strategic decision-making in complex contexts, deepened further by extreme inequalities. However, building processes that actively and practically implement reflection points and risk considerations is one way in which we try to innovate thoughtfully. We believe in building with heart, but also in building with forethought. Good intentions are not enough: care for your communities cannot live only in stated values; it must be embedded, and designed for, in your processes as well.

Over the next few years, OpenUp will be further developing our Working Papers and continuing to innovate our methods. Stay up to date with our work here.
