


Papers we love: ARC by Bryan Cantrill, SSD caching adventures with ZFS, OpenBSD full disk encryption setup, and a Perl5 Slack Syslog BSD daemon.
When you researched xargs, you did it for a purpose, right? You had a specific need: to read standard output and execute commands based on that output.
Use man -k or apropos (they are equivalent). If I don't know how to find a file: man -k file | grep search. Read the descriptions and find the one that best fits your needs.
Always read the DESCRIPTION before starting
Take some time and read the description. Just by reading the description of the xargs command, we learn that:
xargs reads from STDIN and executes the command as needed. This also means that you will need some knowledge of how standard input works, and how to manipulate it through pipes to chain commands, for example:
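(An illustrative pipeline, not from the answer: find prints matching paths on standard output, and xargs turns them into arguments for rm.)

    $ find /tmp -name '*.log' -print0 | xargs -0 rm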
Other things to pay attention to...
You know that you can search for files using find. There are a ton of options, and if you only look at the SYNOPSIS you will get overwhelmed by them. It's just the tip of the iceberg. Excluding NAME, SYNOPSIS, and DESCRIPTION, you will have the following sections:
When this method will not work so well...
Generally, -v means verbose. -vvv is a variation, "very very verbose", in some software.
In the pager part of this answer, we saw that less -is is man's pager. The default behavior of a command is not always shown in a separate section of its manpage, nor is it always in the section placed nearest the top.
After getting all the information needed to execute the command, you can combine options, option-arguments, and operands inline to get your job done. An overview of the concepts:
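For instance (an illustrative command using sort(1)): in the line below, -o is an option, out.txt is its option-argument, and in.txt is an operand.

    $ sort -o out.txt in.txt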
Recently I decided to throw away my old defunct 2009 MacBook Pro, which was rotting in my cupboard, and I decided to retrieve the only useful part before doing so: the 80GB Intel SSD I had installed a few years earlier. Initially I thought about simply adding it to my desktop as a bit of extra space, but in 2017 80GB really wasn't worth it, and then I had a brainwave.
Let's see if we can squeeze some additional performance out of my HP Microserver Gen8 NAS running ZFS by installing it as a cache disk.
Configuration
Let's have a look at the zpool before adding the cache drive, to make sure there are no errors or ugliness:
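(A sketch of the check; the pool name tank is illustrative.)

    $ zpool status tank    # expect "state: ONLINE" and "errors: No known data errors"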
Now let's prep the drive for use in the zpool using gpart. I want to split the SSD into two separate partitions, one for L2ARC (read caching) and one for ZIL (write caching); I have decided on 20GB for ZIL and 50GB for L2ARC. Be warned: using a single SSD like this is considered unsafe, because it is a single point of failure in terms of delayed writes (a redundant configuration with two SSDs would be more appropriate), and the heavy write cycles the ZIL puts on the SSD are likely to kill it over time.
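Roughly, the steps look like this (a sketch; the device name ada1 and pool name tank are illustrative, not from the article):

    # create a GPT scheme and two labeled partitions on the SSD
    gpart create -s gpt ada1
    gpart add -t freebsd-zfs -s 20G -l zil ada1      # write cache (ZIL/SLOG)
    gpart add -t freebsd-zfs -s 50G -l l2arc ada1    # read cache (L2ARC)
    # attach them to the existing pool
    zpool add tank log gpt/zil
    zpool add tank cache gpt/l2arc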
OK, so the initial result is a little disappointing, but hardly unexpected: my NAS sucks and there are lots of bottlenecks: CPU, memory, and the fact that only two of the SATA ports are 6 Gbps. There is no real performance difference between the results; the IOPS, bandwidth, and latency appear very similar. However, let's bear in mind that fio is a pretty hardcore disk benchmark utility. How about some real-world use cases?
Not bad: once the data becomes hot in the L2ARC cache, reads appear to gain a decent advantage compared to reading from the disk directly. How does it perform when writing the same file back across the network, with the ZIL vs. without?
Another good result in the real-world test; this certainly helps the write transfer speed. I do wonder what would happen if you filled the ZIL transferring a very large file, but this is unlikely in my use case, as I typically deal with only a couple of files of several hundred megabytes at any given time, so a 20GB ZIL should suit me reasonably well.
I would imagine that for a big beefy ZFS server running in a company somewhere, with a large disk pool and lots of users, multiple enterprise-level SSDs for ZIL and L2ARC would be well worth the investment; at home, however, I am not so sure. Yes, I did see an increase in read speeds with cached data and a general increase in write speeds, but it is use-case dependent. In my use case I rarely access the same file frequently; my NAS primarily serves as a backup and archive, and although the write speeds are cool, I am not sure they're a deal breaker.

If I built a new home NAS today, I'd probably concentrate the budget on a better CPU, more RAM (for the ARC cache), and more disks. However, if I had a use case where I frequently accessed the same files and needed to do so faster, then yes, I'd probably invest in an SSD for caching. I think if you have a spare SSD lying around and you want something fun to do with it, sure, chuck it in your ZFS-based NAS as a cache. If you were planning on buying an SSD for caching, I'd really consider your needs and decide whether the money could be spent on alternatives that would improve your experience with your NAS more. I know my NAS would benefit more from an extra stick of RAM and a more powerful CPU, but as a quick evening project with some parts I had hanging around, adding some SSD cache was worth a go.
Here is a quick way to set up OpenBSD 6.2 (in seven steps) with full-disk encryption.
First step: Boot and start the installation:
(I)nstall: I
I am using an SSD, so my disk is named sd0; the name may vary, for example: wd0.
Let's resume the OpenBSD installer and follow the install procedure.
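The encrypted volume is created with softraid(4) before the install proper; a sketch of the usual commands from a shell in the install environment (partition letters may vary):

    fdisk -iy sd0                    # initialize the disk with a single OpenBSD MBR partition
    disklabel -E sd0                 # interactively add an 'a' partition of type RAID
    bioctl -c C -l sd0a softraid0    # build the CRYPTO volume; prompts for a
                                     # passphrase and attaches it as a new disk (sd1)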
We select our new volume, in this case: sd1
It's time to choose how we'll install our system (a network install over HTTP, in my case).
Sixth step: Finalize the installation.
Last step: Reboot and start your system.
Enter your passphrase. Welcome to OpenBSD 6.2 with a fully encrypted file system.
The swap partition actually lives inside the encrypted filesystem, so we don't strictly need OpenBSD to encrypt it separately; sysctl gives us that possibility anyway.
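The relevant knob, which has been enabled by default in OpenBSD for a long time, can be persisted in /etc/sysctl.conf:

    # /etc/sysctl.conf
    vm.swapencrypt.enable=1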
This past week, I read the ldd source code on OpenBSD to get a better understanding of how it works. This post should also serve as a reference for other *NIX OSs.
When LD_TRACE_LOADED_OBJECTS is set to 1 or true, running an executable file will show the shared objects it needs instead of running it, so you don't even need ldd to check an executable file. See the following output:
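For example (using /bin/ls as an arbitrary binary):

    $ LD_TRACE_LOADED_OBJECTS=1 /bin/ls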
Why is the condition that decides whether an ELF file is a shared object written the way it is? Paraphrased, the check looks roughly like this (a sketch, not the verbatim ldd source):
That's because a position-independent executable (PIE) has the same ELF file type as a shared object, but a PIE normally contains an interpreter program header, since it needs the dynamic linker to load it, while a plain shared object lacks one (see this article). So the above condition filters out PIE files.
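    #include <elf.h>    /* Elf_Ehdr, ET_DYN */

    /* My paraphrase of the idea: an ELF file is treated as a plain shared
     * object when its type is ET_DYN and it has no PT_INTERP program header. */
    int
    is_plain_shared_object(Elf_Ehdr *ehdr, int has_interp_phdr)
    {
            return ehdr->e_type == ET_DYN && !has_interp_phdr;
    }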
In fact, you can also implement a simple application that outputs dynamic object information for a shared object yourself. On OpenBSD and FreeBSD, dlopen(3) accepts an RTLD_TRACE flag that dumps the object's link map instead of performing a normal load; a minimal sketch (not the author's exact program):
Compile it and use it to analyze /usr/lib/libssl.so.43.2:
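    /* trace.c: print the shared objects needed by a given shared object,
     * the way ldd does it on the BSDs. */
    #include <dlfcn.h>
    #include <stdio.h>

    int
    main(int argc, char *argv[])
    {
            if (argc != 2) {
                    fprintf(stderr, "usage: %s /path/to/shared-object\n", argv[0]);
                    return 1;
            }
            if (dlopen(argv[1], RTLD_TRACE) == NULL) {
                    /* reached only when the object could not be loaded */
                    fprintf(stderr, "dlopen: %s\n", dlerror());
                    return 1;
            }
            return 0;
    }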
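(The file name trace.c is my placeholder.)

    $ cc -o trace trace.c
    $ ./trace /usr/lib/libssl.so.43.2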
The same as using ldd directly:
Studying the ldd source code also brought me many by-products: knowledge of the ELF format, linking and loading, and so on. So diving into code really is a good way to learn *NIX more deeply!
So I have been working on my little Perl daemon for a week now.
The situation arose today that the internet went down, and I thought to myself: what would happen to all my important syslog messages when they couldn't be sent? Before, the script only ran an eval block around the botsend() function. The error was returned and handled, but nothing was done, and the unsent message was discarded. So I added a function that appends unsent messages to an array; they are sent later, when the server is not busy sending messages to Slack.
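A minimal sketch of the queueing idea (the helper names are hypothetical; botsend() is the only name from the post):

    # push messages that fail to send onto a queue...
    my @unsent;

    sub send_or_queue {
        my ($msg) = @_;
        eval { botsend($msg); 1 } or do {
            warn "send failed, queueing: $@";
            push @unsent, $msg;
        };
    }

    # ...and drain the queue when the sender is idle
    sub flush_unsent {
        while (@unsent) {
            my $msg = shift @unsent;
            eval { botsend($msg); 1 } or do {
                unshift @unsent, $msg;    # still offline; try again later
                last;
            };
        }
    }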
There is a neat command, lscpu, which is very handy for displaying CPU information on a GNU/Linux OS:
But unfortunately, the BSD OSs lack this command; maybe one reason is that lscpu relies heavily on the /proc file system, which the BSDs don't provide. :-) Taking OpenBSD as an example, if I want to know CPU information, dmesg is one choice:
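For example, filtering the boot messages for CPU lines:

    $ dmesg | grep -i '^cpu'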
But the output feels messy, not very clear. As for dmidecode, it used to be another option, but now it doesn't work out of the box, because it accesses /dev/mem, which OpenBSD doesn't allow by default for security reasons (see this discussion):
Given the above, I wanted a dedicated command for showing CPU information on my BSD box. So over the past two weeks I developed an lscpu program for OpenBSD/FreeBSD, or more accurately, for OpenBSD/FreeBSD on the x86 architecture, since I only have some Intel processors at hand. The application gets CPU metrics from two sources: