
In part two of my interview with Delft University of Technology’s assistant professor of cyber risk, Dr. Wolter Pieters, we continue our discussion on transparency versus secrecy in security.
We also cover ways organizations can present themselves as trustworthy. How? Be very clear about managing expectations. Declare your principles so that end users can trust that you'll act on the principles you advocate. Lastly, have a plan for what to do when something goes wrong.
And of course there's a caveat: Wolter reminds us that there's also a very important place in this world for ethical hackers. Why? Not all security issues can be solved during the design stage.
Privacy, but then again, I think privacy is a bit overrated. This is really about power balance, because everything we do in security will give some people access and exclude other people, and that's a very fundamental thing. It's basically a power balance that we embed into technology through security. And that is what fundamentally interests me in relation to security and ethics.
So, we've seen algorithms that were kept secret, trade secrets, etc., being broken the very moment the algorithm became known. So, in that sense, I think most researchers would agree this is good practice. On the other hand, it seems that there's also a certain limit to what we want to be transparent about, both in terms of security controls, where we're not giving away every single thing governments do in terms of security online, so there is some level of security by obscurity there, and more generally, in terms of to what extent transparency is a good thing. This again ties in with who is a threat. I mean, we have the whole WikiLeaks endeavor, and some people will say, "Well, this is great. The government shouldn't be keeping all that stuff secret." So, it's great for trust that this is now all out in the open. On the other hand, you could argue that this is actually a threat to trust in the government, so this form of transparency would be very bad for trust.
So, there's clearly a tension there. Some level of transparency may help people trust in the protections embedded in the technology and in the actors that use those technologies online. But on the other hand, if there's too much transparency, all the nitty-gritty details may actually decrease trust. You see this all over the place. We've seen it with electronic voting as well. If you provide some level of explanation of how certain technologies are being secured, that may help. If you provide too much detail, people won't understand it, and it will only increase distrust. There is a kind of golden middle there in terms of how much explanation you should give to make people trust in certain forms of security, encryption, etc. And again, in the end, people will have to rely on experts, because with physical forms of security, physical ballot boxes, it's possible to explain how these work and how they are being secured. With digital, that becomes much more complicated, and most people will have to trust the judgment of experts that these forms of security are actually good, if the experts believe so.
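The point above about trade-secret algorithms being broken the moment they leak is, in effect, Kerckhoffs's principle: a system should stay secure even when everything about it except the key is public. Here is a minimal sketch of that idea in Python, using the open, well-reviewed Fernet construction from the `cryptography` package (an illustrative choice on our part, not a tool discussed in the episode):

```python
# Kerckhoffs's principle in practice: the algorithm (AES-based Fernet) is
# public and heavily reviewed; the key is the only secret.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # the ONLY secret in the system
cipher = Fernet(key)

token = cipher.encrypt(b"ballot: candidate 42")
print(cipher.decrypt(token))  # b'ballot: candidate 42'

# Leaking this program's source code costs nothing; leaking `key` costs
# everything, and a leaked key can be rotated per deployment. A
# security-by-obscurity design inverts that: once the secret algorithm is
# reverse-engineered, every deployment is broken at once.
```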
Not only should it be possible to change your privacy settings, to regulate the access that other users of the social networking service have to your data, but at the same time you need to be crystal clear about how you, as a social network operator, are using that data. Because sometimes I get the impression that the big internet companies are offering all kinds of privacy settings which give people the impression that they can do a lot in terms of their privacy, and yes, this is true for inter-user data access, but the provider still sees everything. This seems to be a way of framing privacy in terms of inter-user data access. Whereas I think it's much more fundamental what these companies can do with all the data they gather from all their users, and what that means in terms of their power and the position they get in this whole arena of cyberspace.
So, managing expectations. I mean, there are all kinds of different standpoints, based on different ethical theories and on different political points of view, that you could take in this space. If you want to behave ethically, then make sure you list your principles, you list what you do in terms of security and privacy to adhere to those principles, and you make sure that people can actually trust that this is also what you do in practice. And also make sure that you know exactly what you're going to do in case something goes wrong anyway. We've seen too many breaches where the responses by the companies were not quite up to standard, for instance in delaying the announcement of the breach. It's crucial to not only do prevention in terms of security and privacy, but also to know what you're going to do in case something goes wrong.
The same goes for elections: there is no neutral space from which people can cast their vote without being influenced, and we've seen in recent elections that technology is playing more and more of a role in how people perceive political parties and how they make decisions in terms of voting. So, it's inevitable that technology companies have a role in those elections, and that's something they also need to acknowledge.
And then of course, and I think this is a big question that needs to be asked: can we prevent a situation in which certain online stakeholders, whether those are companies or nation states or whatever, get so much power that they are able to influence our governments, either through elections or through other means? That's a situation that we really don't want to be in, and I'm not pretending that I have a crystal clear answer there, but this is something that we should at least consider as a possible scenario.
And then there are all these doomsday scenarios, the cyber Pearl Harbor, and I'm not sure whether these doomsday scenarios are the best way to think about this. But we should also not be naive and think that all of this will blow over, because maybe indeed we have already been giving away too much power in a sense. So, what we should do is fundamentally rethink the way we think about security and privacy, away from "Oh, damn, my photos are, I don't know, in the hands of whoever." That's not the point. It's about the scale at which certain actors either get their hands on data or are able to influence lots of individuals. So, again, scale comes in there. It's not about our individual privacy; it's about the power that these stakeholders get by having access to the data, or by being able to influence lots and lots of people, and that's what the debate needs to be about.
Wolter Pieters: Yeah, I think that's an issue, but if that's going to happen, if people are afraid to play this role because legislation doesn't protect them enough, then maybe we need to do something about that. If we don't have people who point us to essential weaknesses in security, then what will happen is that those issues will be kept secret and will be misused in ways that we don't know about, and I think that's a much worse situation to be in.
Want to join us live? Save a seat here: https://www.varonis.com/state-of-cybercrime
More from Varonis ⬇️
Visit our website: https://www.varonis.com
LinkedIn: https://www.linkedin.com/company/varonis
X/Twitter: https://twitter.com/varonis
Instagram: https://www.instagram.com/varonislife/