
This week on BSD Now, we’ve got all sorts of post-holiday goodies to share: new OpenSSL APIs, DTrace, OpenBSD, and more.
As many of you will already be aware, the OpenSSL 1.1.0 release intentionally introduced significant API changes from the previous release. In summary, a large number of data structures that were previously publicly visible have been made opaque, with accessor functions added to get and set some of the fields within these now-opaque structs. It is worth noting that opaque data structures are generally beneficial for libraries, since changes can be made to them without breaking the ABI. As such, the overall direction of these changes is largely reasonable.
However, while API change is often necessary for a library to progress, in this case there appears to be no transition plan and a complete disregard for the impact these changes have on the wider open source ecosystem.
So far, the only approach seems to be to place the migration burden onto each and every software project that uses OpenSSL: every project that moves to OpenSSL 1.1 must make significant code changes while still maintaining compatibility with the previous API. This forces each project to provide its own backwards-compatibility shims, practically guaranteeing a proliferation of variable-quality implementations; it is almost certain that some of these will contain bugs, potentially introducing security issues or memory leaks.
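To make the problem concrete, here is a minimal sketch of the kind of shim many projects have ended up carrying, emulating two of the 1.1 allocator-style functions on top of the public 1.0.x struct (an illustrative example, not any particular project’s code):

#include <openssl/evp.h>
#include <openssl/opensslv.h>

#if OPENSSL_VERSION_NUMBER < 0x10100000L
/* 1.0.x exposes EVP_MD_CTX publicly, so we can allocate it ourselves
 * and present the 1.1-style opaque API to the rest of the code base. */
static EVP_MD_CTX *EVP_MD_CTX_new(void)
{
    EVP_MD_CTX *ctx = OPENSSL_malloc(sizeof(*ctx));

    if (ctx != NULL)
        EVP_MD_CTX_init(ctx);
    return ctx;
}

static void EVP_MD_CTX_free(EVP_MD_CTX *ctx)
{
    if (ctx != NULL) {
        EVP_MD_CTX_cleanup(ctx);
        OPENSSL_free(ctx);
    }
}
#endif

Multiply that by every changed structure and every project that consumes OpenSSL, and the scale of the duplicated (and separately debugged) effort becomes clear.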
Due to a number of factors, software projects that make use of OpenSSL cannot simply migrate to the 1.1 API and drop support for the 1.0 API - in most cases they will need to continue to support both. Firstly, I am not aware of any platform that has shipped a production release with OpenSSL 1.1 - any software that supported only OpenSSL 1.1 would effectively be unusable on every platform for the time being. Secondly, the OpenSSL 1.0.2 release is supported until the 31st of December 2019, while OpenSSL 1.1.0 is only supported until the 31st of August 2018 - any LTS-style release is clearly going to consider shipping with 1.0.2 as a result.
Platforms that are attempting to ship with OpenSSL 1.1 are already encountering significant challenges - for example, Debian currently has 257 packages (out of 518) that do not build against OpenSSL 1.1. There are also hidden gotchas for situations where different libraries are linked against different OpenSSL versions and then share OpenSSL data structures between them - many of these problems will be difficult to detect since they only fail at runtime.
Another, similar way to create a backchannel without transmitting anything is to introduce delays in the receiver and measure the throughput observed by the sender. All we need is a protocol with transmission control. Hmmm. Actually, it’s easier (and more reliable) to code this up using a plain pipe, but the same principle applies to networked transmissions.
For every digit we want to “send” back, we sleep a few seconds, then drain the pipe. We don’t care about the data, although if this were a video file or an OS update, we could probably do something useful with it.
Continuously fill the pipe with junk data. If (when) we block, calculate the difference between before and after. This is our secret backchannel data. (The reader and writer use different buffer sizes because, on OpenBSD at least, a writer will stay blocked even after a read, depending on the space that opens up. Even simple demos have real-world considerations.)
In this simple example, the secret data (argv) is shared by both processes, but we can see that the writer isn’t printing it from its own address space. Nevertheless, it works.
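The article’s code isn’t reproduced in these notes; here is a compressed sketch of the same idea (the buffer sizes, the digit-plus-two encoding, and the second-granularity timing are my own choices, not the author’s):

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    int fd[2];
    static char junk[4096];       /* the sender's meaningless payload */
    static char drain[65536];     /* deliberately larger: empty the pipe in one read */
    const char *secret = argc > 1 ? argv[1] : "314";
    const char *p;

    signal(SIGPIPE, SIG_IGN);     /* have write() return EPIPE instead of killing us */
    if (pipe(fd) == -1)
        return 1;

    if (fork() == 0) {                         /* receiver: never transmits a byte */
        close(fd[1]);
        for (p = secret; *p >= '0' && *p <= '9'; p++) {
            sleep(*p - '0' + 2);               /* the delay *is* the message */
            read(fd[0], drain, sizeof(drain)); /* drain the pipe, unblocking the sender */
        }
        _exit(0);
    }

    close(fd[0]);                              /* sender: keep the pipe stuffed with junk */
    memset(junk, 'x', sizeof(junk));
    for (;;) {
        time_t before = time(NULL);
        if (write(fd[1], junk, sizeof(junk)) == -1)
            break;                             /* receiver exited; we're done */
        time_t blocked = time(NULL) - before;
        if (blocked >= 1)                      /* only the stalled writes carry data */
            printf("stalled %lds, digit ~%ld (timing is approximate)\n",
                (long)blocked, (long)blocked - 2);
    }
    return 0;
}

The sender only ever writes junk, yet the timing of its own blocked writes leaks the receiver’s digits back to it.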
Time to add random delays and buffering to firewalls? Probably not.
I had been procrastinating making the family holiday card. It was a combination of having a lot on my plate and dreading the formulation of our annual note recapping the year; there were some great moments, but I’m glad I don’t have to do 2016 again. It was just before midnight and either I’d make the card that night or leave an empty space on our friends’ refrigerators.
I’m not the first person to hit this. The problem seems to have existed since CS6 was released in 2012. None of the suggested solutions worked for me, and — inspired by Sara Mauskopf’s excellent post — I was rapidly running out of time for the project. Enough; I’d just DTrace it.
A colleague scoffed the other day, “I mean, how often do you actually use DTrace?” In his mind DTrace was for big systems, critical systems, when dollars and lives were at stake. My reply: I use DTrace every day. I can’t imagine developing software without DTrace, and I use it when my laptop (not infrequently) does something inexplicable. (I’m forever grateful to the Apple team that ported it to Mac OS X.)
Illustrator is failing on setrlimit(2) and blowing up as a result. Let’s confirm that it is in fact returning -1 (note that execname is truncated by the kernel, hence the missing “r”):

$ sudo dtrace -n 'syscall::setrlimit:return/execname == "Adobe Illustrato"/{ printf("%d %d", arg1, errno); }'
There it is. And setrlimit(2) is failing with errno 1, which is EPERM (value too high for a non-root user). I had already tuned the files limit up pretty high. Let’s confirm that it is in fact setting the files limit, and check the value to which it’s being set. To write this script I looked at the documentation for setrlimit(2) (hooray for man pages!) to determine the position of the resource parameter (arg0) and the type of the value parameter (struct rlimit). I needed the DTrace copyin() subroutine to grab the structure from the process’s address space:
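The script itself isn’t included in these notes; a rough reconstruction of the kind of one-liner being described (the output format and the visibility of struct rlimit to D are assumptions on my part) might look like this:

$ sudo dtrace -n 'syscall::setrlimit:entry
    /execname == "Adobe Illustrato"/
    {
        /* arg0 is the resource, arg1 points at the struct rlimit in user space */
        this->rl = (struct rlimit *)copyin(arg1, sizeof (struct rlimit));
        printf("resource %d cur %llu max %llu", (int)arg0,
            (unsigned long long)this->rl->rlim_cur,
            (unsigned long long)this->rl->rlim_max);
    }'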
dtrace: description 'syscall::setrlimit:entry' matched 1 probe
The quickest solution was to use DTrace again to whack a smaller number into that struct rlimit. Easy:
dtrace: description 'syscall::setrlimit:entry' matched 1 probe
Oh right. Thank you, SIP (System Integrity Protection). This is a new laptop (at least a new motherboard, due to some bizarre issue), which probably contributed to Illustrator not working when it once did. Because it’s new, I haven’t yet disabled the part of SIP that prevents you from using DTrace on the kernel or in destructive mode (e.g. copyout()). It’s easy enough to disable, but I’m reboot-phobic — I hate having to restart my terminals — so I went to plan B: lldb.
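The lldb session itself isn’t shown; one generic way to neuter the failing call from lldb (an illustration of the approach, not necessarily the author’s exact steps) would be to attach, break on setrlimit, and force the call to “return” success:

$ lldb -p $(pgrep -f "Adobe Illustrator")    # assuming a single matching process
(lldb) breakpoint set --name setrlimit
(lldb) continue
# when the breakpoint hits, skip the real call and report success instead
(lldb) thread return 0
(lldb) continue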
Next I just did a process detach and got on with making that holiday card…
DTrace was designed for solving hard problems on critical systems, but the need to understand how systems behave exists in development and on consumer systems. Just because you didn’t write a program doesn’t mean you can’t fix it.
He starts off with a look at physical security, listing your options.
Out of those options, Brian mentions that he uses disk encryption and a YubiKey for all of his secure network systems.
Next up is network segmentation; in this case, the first thing to do is change the admin password on any ISP-supplied modem.
For added security, he naturally firewalls the router by plugging its LAN port into an OpenBSD box, which provides a second layer of firewall/router protection.
What about privacy and browsing? Here are some more of his tips:
I use Unbound as my DNS resolver on my local network (with all UDP port 53 traffic redirected to it by pf so I don’t have to configure anything on the clients) and then forward the traffic to DNSCrypt Proxy, caching the results in Unbound. I notice ZERO performance penalty for this and it greatly enhances privacy. This combination of Unbound and DNSCrypt Proxy works very well together. You can even have redundancy by having multiple upstream resolvers running on different ports (basically run the DNSCrypt Proxy daemon multiple times pointing to different public resolvers).
I also use Firefox exclusively for my web browsing. By leveraging the tips on this page, you can lock it down to do a great job of privacy protection. The fact that your laptop’s battery drain rate can be used to fingerprint your browser completely trips me out but hey – that’s the world we live in.
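As a rough sketch of how the Unbound / pf / DNSCrypt Proxy arrangement described above fits together (the interface macro and the dnscrypt-proxy listen address of 127.0.0.1 port 40 are my assumptions, not details from the post):

# pf.conf: transparently steer every client's UDP DNS query to the local resolver
pass in quick on $int_if inet proto udp to any port 53 rdr-to 127.0.0.1 port 53

# unbound.conf: answer and cache locally, forward everything to dnscrypt-proxy
server:
        interface: 127.0.0.1
        do-not-query-localhost: no
forward-zone:
        name: "."
        forward-addr: 127.0.0.1@40      # where dnscrypt-proxy is assumed to listen

Redundancy, as he notes, is then just a matter of adding more forward-addr lines pointing at additional dnscrypt-proxy instances on other ports.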
I recently decided I would try to live a cloud-free life, and I’ll give you a bit of a synopsis of it. I discovered a wonderful open source project called FreeNAS. What this little gem does is let you install a FreeBSD/ZFS file server appliance on amd64 hardware, with a slick administrative web interface for managing it. I picked up a nice SuperMicro motherboard and chassis that has four hot-swap drive bays (and two internal bays that I used to mirror the boot volume on) and am rocking the ZFS lifestyle! (Thanks, Allan Jude!)
One of the nicest features of FreeNAS is that it provides the ability to leverage FreeBSD’s jail functionality in an easy-to-use way. It also has plugins, but the security on those is a bit sketchy (old versions of libraries, etc.), so I decided to roll my own. I created two jails – one to run ownCloud (yeah, I know about Nextcloud and might switch at some point) and the other to run a full SMTP/IMAP email server stack. I used Let’s Encrypt to generate the SSL certificates and made sure I hit an A on SSL Labs before I did anything else.
Enter Tarsnap – a company that advertises itself as “Online Backups for the Truly Paranoid”. It brings a tear to my eye – a kindred spirit! :-) Thanks again to Allan Jude and Kris Moore from the BSD Now podcast for turning me on to this company. It has a very easy command syntax (yes, it isn’t a GUI tool – suck it up, buttercup, you wanted to learn the shell, didn’t you?) and even lets you compile the thing from source if you want to.
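For a flavor of that command syntax (the archive names and paths here are made up for illustration, not taken from the post):

# create a dated archive of the jail data sets
tarsnap -c -f nas-$(date +%Y%m%d) /mnt/tank/owncloud /mnt/tank/mail
# see what's stored, and pull an archive back if needed
tarsnap --list-archives
tarsnap -x -f nas-20170105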
DTrace is another vital feature for anyone who has had to deal with production issues, and it has been in FreeBSD since version 9. As of FreeBSD 11, the operating system contains some great work by Fedor Indutny, so you can profile Node applications and create flame graphs of Node.js processes without any additional runtime flags or restarting of processes.
To configure your FreeBSD instance to use this feature, make the following changes to the server’s configuration.
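The notes don’t reproduce those changes; the typical FreeBSD steps look something like the following, and the profiling one-liner is adapted from Brendan Gregg’s published Node.js examples, so treat the details as assumptions rather than the article’s exact commands:

# load the DTrace kernel modules now, and again on every boot
kldload dtraceall
echo 'dtraceall_load="YES"' >> /boot/loader.conf

# sample Node stacks at ~99 Hz for 30 seconds, ready for flame graph tooling
dtrace -n 'profile-97 /execname == "node" && arg1/
    { @[jstack(100, 8000)] = count(); }
    tick-30s { exit(0); }' > stacks.out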
Also check out Brendan Gregg’s ACM Queue Article “The Flame Graph: This visualization of software execution is a new necessity for performance profiling and debugging”
A lot of work went into getting SSHGuard 2.0 working with new log sources (journalctl, the macOS log) and new backends (firewalld, ipset). The new version also uses a configuration file.
Most importantly, SSHGuard has been split into several processes piped into one another (sshg-logmon | sshg-parser | sshg-blocker | sshg-fw). sshg-parser can run with capsicum(4) and pledge(2). sshg-blocker can be sandboxed in its default configuration (without pid file, whitelist, blacklisting) and has not been tested sandboxed in other configurations.
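A minimal sketch of that configuration file, with variable names as found in recent sample configs (the backend path and the values here are illustrative, not canonical):

# sshguard.conf
BACKEND="/usr/local/libexec/sshg-fw"    # firewall backend fed by sshg-blocker
FILES="/var/log/authlog"                # log files watched by sshg-logmon
THRESHOLD=30                            # attack score that triggers a block
BLOCK_TIME=120                          # seconds an attacker stays blocked
DETECTION_TIME=1800                     # seconds before an attacker's score resets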
pjd’s 2007 paper from AsiaBSDCon: “Porting the ZFS file system to the FreeBSD operating system”
A Message From the FreeBSD Foundation
Remembering Roger Faulkner, Unix Champion and A few HN comments (including Bryan Cantrill)