
Integrating artificial intelligence into your workflow requires you not only to evaluate where and how it can add value, but also to consider what ethical considerations arise with each new implementation and what policies you might need to put in place as you go. So far I have found that AI can offer a lot of value in collaborative processes, but there are a number of areas where it is easy to violate trust in ways that will harm adoption in the future.
In this post, I wanted to document some of the considerations that have come up so far, and the beginnings of a framework for approaching your own policies. I’ve boiled it down to a few design principles around five key areas: risk, power, privacy, ownership and value.
Risk
Principle: Learn with low stakes
There are a lot of risks, both real and imagined, in adopting AI, but they can be difficult to identify until you start using it. There could be security exposure, legal issues, hallucinations or reputational damage, and those risks can be hard to fully understand in advance. Each platform has its own risks, but so does every use case. Experimenting and prototyping with low-risk data and low-risk stakeholders in a low-risk environment can help you shape a better understanding of what the possibilities are, and a more realistic picture of the risks. Trying to design a perfect implementation up front makes it very difficult to understand the full picture; I’ve been playing with hardware, software, process and context in “safe” projects in order to get a better understanding of the pitfalls.
As a result of this approach, I now gravitate toward hardware with failsafes, clear articulations of data retention policies, AI platforms with clear Terms of Service, and processing of inputs rather than outputs.
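To make the “low stakes” idea concrete, here is a minimal sketch in Python of the kind of pilot I run before any real data is involved: the inputs are synthetic, and a short checklist captures the retention, Terms of Service and failsafe questions I want answered first. Every name and field below is my own invention for illustration, not a real product or API.

```python
# A minimal sketch of "learn with low stakes": invent the data, write down
# the questions you want answered, and only scale up once they are.
# All names and fields here are hypothetical.
from dataclasses import dataclass, field
import random


@dataclass
class PilotChecklist:
    """Questions to answer before a prototype touches real data."""
    data_is_synthetic: bool            # are the inputs made up?
    retention_policy_documented: bool  # do we know how long the vendor keeps data?
    terms_of_service_reviewed: bool    # read, not skimmed
    hardware_failsafe: bool            # can capture be physically switched off?
    notes: list[str] = field(default_factory=list)

    def ready_for_pilot(self) -> bool:
        return all([self.data_is_synthetic,
                    self.retention_policy_documented,
                    self.terms_of_service_reviewed,
                    self.hardware_failsafe])


def synthetic_comments(n: int = 10) -> list[str]:
    """Generate obviously fake workshop comments so a pilot never needs real input."""
    topics = ["parking", "meeting cadence", "lunch options", "office plants"]
    moods = ["love", "are unsure about", "want to change", "have questions about"]
    return [f"Participant {i} says they {random.choice(moods)} {random.choice(topics)}."
            for i in range(1, n + 1)]


if __name__ == "__main__":
    checklist = PilotChecklist(True, True, True, True)
    print("Ready for a low-stakes pilot:", checklist.ready_for_pilot())
    print(synthetic_comments(3))
```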
Power
Principle: Level, don’t amplify, power imbalances
It’s easy to imagine uses for AI that allow you to centralize a lot of control and to optimize, automate or monitor a wider and wider range of inputs. I believe, however, that the AI use cases that will get the most traction are the ones that rebalance power, as opposed to exacerbating existing imbalances. Imagine a call centre, for example. One approach that leans on an existing imbalance would be to deploy chatbots and voice agents that let the call centre operate with fewer staff and triage callers before they speak to an agent. If the optimization is purely for the benefit of the company, it will most likely result in even more frustration for callers. An approach that addresses the imbalance would be to have an AI that works as an agent on behalf of the caller, minimizing the time they have to spend and negotiating a solution before reaching back out to them.
In a collaborative process, AI can be used to provide more channels for more input and engagement from more people in a meaningful way. Use it to increase, not replace, engagement.
Privacy
Principle: Respect autonomy, earn trust and don’t be creepy
In the workplace, and especially in collaborative settings, it is now possible to process so many inputs that it is very easy to move from “capture” to “surveillance”. I believe that over time, processes that don’t respect the rights of the people who participate will struggle to get buy-in, and even those that do will be biased by the behaviour of individuals who know they are being surveilled. Once trust is lost, it is very difficult to get back; and when this technology is being used in environments where there are low levels of trust to begin with, extra steps will have to be taken to earn buy-in. If you are planning an approach that takes away the autonomy of users or spies on them in a way that wouldn’t otherwise be socially acceptable (would you do this to your family? Friends?), it is very likely to backfire.
While I now use microphones in breakout sessions, for example, I am crafting a clear privacy policy around the retention and use of any recordings, and am iterating the system to have no human in the loop so that comments are not traceable to individuals (Chatham House Rule).
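As an illustration of what “no human in the loop” can look like, here is a minimal Python sketch that strips speaker identifiers and shuffles comments before anyone reads them. The transcript segment format is an assumption made for this example, not the output of any particular transcription tool.

```python
# A minimal sketch of Chatham-House-style anonymization: speaker-attributed
# transcript segments go in, unattributed and shuffled comments come out.
# The segment format ({"speaker": ..., "text": ...}) is assumed for this example.
import random


def anonymize_segments(segments: list[dict]) -> list[str]:
    """Drop speaker identifiers and shuffle so ordering can't reveal who spoke."""
    comments = [seg["text"].strip() for seg in segments if seg.get("text")]
    random.shuffle(comments)  # break the link between ordering and identity
    return comments


if __name__ == "__main__":
    raw = [
        {"speaker": "S1", "text": "We should publish the budget earlier."},
        {"speaker": "S2", "text": "The timeline feels too aggressive."},
        {"speaker": "S1", "text": "Agree with the earlier point on the budget."},
    ]
    # Only the anonymized list is shared onward; 'raw' stays in memory and is
    # never persisted, so what a person eventually reads carries no names.
    for comment in anonymize_segments(raw):
        print("-", comment)
```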
Ownership
Principle: Ownership of inputs should correlate to ownership of outputs
Aggregating data to build a new value proposition can lead to the same issues that AI companies have been facing with copyright holders: they are selling the outputs of a model that was created using other people’s inputs. If you are planning on aggregating data, or profiting from the output of aggregation, you should do this in collaboration with those who create the inputs. This is not only the right thing to do; because this is an evolving area of law, it also protects you from unforeseen exposure in the future.
Value
Principle: Build generative, not extractive, value propositions
AI can be used to extract benefit from others, or it can be used to generate value for everyone involved. While extracting value might be profitable, I think the longer-term value lies in generative value propositions. In a collaborative setting, you might use AI to accumulate interaction data that has value over time, or to build “lock-in” with groups because you hold their data, but I think this will be met with more and more resistance as people become more savvy with the technology. Using AI to build supports for groups that speed their work and enrich their experience will, I think, get a lot more adoption over time.
In Conclusion
If there were a final principle I would add, it would be this: be willing to show your work. I think that transparency is the best test across the entire workflow. If you’re not comfortable sharing who benefits and how, what technology you’re using, how you’re managing the data, what the data can be used for, and how you’re thinking about all the stakeholders in the process, then that should be a gut check that you need to make some changes in your approach.
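As a sketch of what “showing your work” could look like in practice, here is a small, publishable disclosure record that answers those questions in one place. The structure and field names are my own invention for illustration, not any standard, and the example values are hypothetical.

```python
# A hypothetical "show your work" disclosure, published alongside a process.
# The fields simply mirror the questions above; none of this is a standard.
import json

disclosure = {
    "who_benefits_and_how": ["participants: faster synthesis of their input",
                             "facilitation team: less manual transcription"],
    "technology_used": ["breakout-room microphones", "a hosted transcription model"],
    "data_management": "recordings deleted after transcription; transcripts anonymized",
    "permitted_uses_of_data": ["aggregate theming of comments", "the final report"],
    "stakeholders_considered": ["participants", "facilitators", "the client sponsor"],
}

print(json.dumps(disclosure, indent=2, ensure_ascii=False))
```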