I want to talk about how we are setting the standards for our digital world and society. Now that our whole world and everything in it seems to have become a business case for for-profit digitalisation, we are purposefully driven to use off-the-shelf, inflexible, data-absorbing and exploitative products marketed to us by big companies. Can standards help change that? And how do we do that?
Here at WHY we are all curious about everything technical and, on average, also pretty adept at working with digital nails and hammers. But even so, these big companies drive out many grassroots alternatives, making it harder and harder to set our own course.
And not just that. It is not everybody’s forte to ‘do digitalisation’. Even if it isn’t yours, you still deserve a decent quality product. Maybe you should not be tricked into handing over all your data, including pretty private information, to marketing companies. And with promises of big profits come the investors, the people who don’t care about the product but instead focus on profit margins and return on investment - with varying levels of appreciation for trivialities like workers’ rights or environmental protection. (/s)
Europe, and the Netherlands, does have something to protect though: the right not to be surveilled by big tech and employers, a high-quality education system, and medical devices geared toward finding ailments rather than selling medical interventions, for instance. It also has a responsibility to ensure that people outside Europe are not abused and exploited for the products we use to brighten our lives, and to avoid further climate catastrophe just because we all want our own individually generated film of Will Smith slurping spaghetti.
In the past decade(s), the EU has worked with the New Legislative Framework: the idea that standards organisations develop the ‘how to comply’ options that industry can use to make a product that is presumed to be in accordance with the European rules. This takes time, and in AI we see the backlash from trying to do this for an area where there really isn’t a ‘state of the art’ yet. Much of it is still just experiments, and rules need to be in place before evaluation of those experiments has even started. Hardly anyone is daring enough to address the question of sustainability given the enormous demands of generative AI models, and the words ‘copyright’ and ‘creator’ are nowhere to be found.
I actually have few answers at all; I wish I knew. I do however have a desire to ask: why are we not talking about this? Why are we not demanding to see the return on investment, for our society, of these extremely demanding and expensive tools? And why are we avoiding reminding the very few enormous companies (and their owners) making all the money that they need to pay for the resources they are ruthlessly seizing from our public domain? Why do we leave this depletion for individuals to solve - the individual creator who has to go to court, the tiny village whose water is being depleted? Where is the support for those who need help to stand up to this exhaustion? Can we please ask ourselves where ALL THIS will be addressed, before it is too late?
Licensed to the public under https://creativecommons.org/licenses/by/4.0/
about this event: https://program.why2025.org/why2025/talk/TFRMSB/