Plex is a great piece of software. If you’ve never heard of it, think of it as an easy-to-use service that runs on a computer at home and streams just about any format of audio/video to a smart TV, Apple TV, Roku, or modern console. You can even easily configure it so that your iOS device can stream content from your media server across the Internet. Perfect!
However, maybe your home Internet upload speed is not very good. Or you have a data cap. Or you’re uploading a massive amount of data to Amazon or Backblaze for backups and don’t want to make that process even slower by spending precious upload bandwidth on Plex. This site is hosted on a VPS instance with way more disk space than I need for a small blog, so I have plenty of disk space and bandwidth to stream my music from there instead of from my home computer.
First of all, getting the service installed is as simple as downloading the .deb file from Plex’s site and following the install instructions. Really, the one and only hiccup I ran into (and the reason I decided to write this blog post) is that once the service is installed, it expects you to configure it by visiting http://localhost:32400/web. But this is a command-line-only Linux environment, and Lynx doesn’t get the job done (I tried).
After much Googling, all I could find were references to using ssh to set up a tunnel and changing your browser’s proxy settings so that the Plex service thought you were accessing it from the local machine. That was, in my experience, a bunch of crap and never worked. Eventually I found a forum post that simply said to edit the Plex config setting that restricts the initial setup to the local host. A quick trip to https://www.whatismyip.com and a quick edit in vi, and I was in business.
Here’s all you have to do:
- Change into /var/lib/plexmediaserver/Library/Application Support/Plex Media Server/
- Edit the Preferences.xml file
- The file should contain two lines; the second is very long and begins with the <Preferences tag.
- Inside that tag, add an allowedNetworks attribute set to your public IP address. For example: allowedNetworks="188.8.131.52/255.255.255.255"
- Save the file, restart the Plex service, and POOF! You can now log in and configure the server via http://server-ip-address:32400/web
After you configure the service, be sure to remove the allowedNetworks attribute from the XML file and restart the service.
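The whole change boils down to inserting one attribute into the Preferences tag. Here’s a sketch of that edit done with sed, demonstrated on a throwaway sample file in the current directory (the IP is a documentation example; substitute your own public address, and run the same substitution against the real Preferences.xml as root before restarting Plex):

```shell
# Create a minimal stand-in for Plex's Preferences.xml to demonstrate the edit.
printf '<?xml version="1.0" encoding="utf-8"?>\n<Preferences MachineIdentifier="example"/>\n' > Preferences.xml

# Insert the allowedNetworks attribute into the <Preferences ...> tag.
sed -i 's|<Preferences |<Preferences allowedNetworks="203.0.113.10/255.255.255.255" |' Preferences.xml

# Show the attribute we just added.
grep -o 'allowedNetworks="[^"]*"' Preferences.xml
```

Against the real file, point sed at /var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Preferences.xml instead (quoting the path, since it contains spaces), then restart the Plex service.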
We’re having a fun snow day today in my part of the world. We’re expected to get somewhere between 12″ and 24″ of snow. As I type this, we’re at around 8″, and it hasn’t stopped snowing since around 9 AM.
Now you might be wondering: just what does this have to do with the blog? Well, I wanted a way to post some snow pictures to a forum and didn’t want to use Imgur or Droplr. So I created a directory within /var/www/html on the server, set the proper permissions, and started uploading photos from the camera roll on my iPhone. However, I didn’t want to post photos with GPS data embedded in them. A quick trip to Google revealed a command-line tool that’s perfect for the job. I had no idea exiftool existed, but it does, and it does its job well. A quick run of the tool and my photos were metadata-free.
Exiftool can be downloaded from your distro’s package system, and the quick example I used on how to use it can be found at Linux Magazine.
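For reference, the invocation boils down to something like the following (a sketch, not necessarily the exact command I ran; note that exiftool keeps a “<name>_original” backup copy of each file by default):

```
# Strip all metadata from every JPEG in the current directory:
exiftool -all= *.jpg

# Or remove only the GPS tags and overwrite the files in place:
exiftool -gps:all= -overwrite_original *.jpg
```

The second form is handy if you want to keep the rest of the EXIF data (camera model, exposure settings) and only scrub location.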
So you may have noticed that the blog now accepts HTTPS connections! That’s right, https://www.thesysadminlife.com is now a working and valid URL. I joined the beta of Let’s Encrypt; it took about five minutes to set up and couldn’t have been easier (especially considering what a pain in the ass SSL certs have typically been).
This site runs on Apache, which is a supported web server for the Let’s Encrypt client. I grabbed a copy of the latest code from Git and ran the following command:
./letsencrypt-auto --apache -d thesysadminlife.com -d www.thesysadminlife.com
It churned for a few minutes and then asked which Apache config file contains the virtual host settings for my site. I am running Debian on a VPS that was provisioned from scripts, so there were three options to pick from and I wasn’t sure which one was correct. My first attempt failed, so I re-ran the command above and picked the option to re-install the already provisioned cert. With a different choice, it succeeded and everything worked fine. I was also given the choice to redirect HTTP traffic to HTTPS traffic or to accept both. Since this site is just a personal blog, I chose to accept both types (for now).
One thing I didn’t know before starting this is that certificates from Let’s Encrypt are only valid for 90 days. I followed the instructions and easily set up a cron job that renews the cert every 60 days, giving me a month of buffer in case something goes wrong.
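The renewal job can be as simple as a single crontab entry along these lines (a sketch: the /opt/letsencrypt checkout path is an assumption, and the renew subcommand comes from later releases of the client; with the original beta you would simply re-run the same --apache command shown above):

```
# Roughly every 60 days: 3 AM on the 1st of every other month.
0 3 1 */2 * /opt/letsencrypt/letsencrypt-auto renew --quiet >> /var/log/le-renew.log 2>&1
```

Logging the output somewhere makes it easy to see whether a renewal quietly failed before the buffer month runs out.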
It really was the best experience I’ve ever had when dealing with server certificates. I’m not sure how it could have been easier. I can completely recommend this service to anyone wanting to secure their site (though for an e-commerce site, perhaps a paid cert would be a better choice).
For setup instructions, check out the documentation over at Let’s Encrypt.
I ran into an issue the other day where a file on a network share ended up with its NTFS permissions being hosed in such a way that no one could edit, delete, or even take ownership of it. I’m not sure how it happened, but it did and the ticket ended up with me to get it fixed.
Nothing I did in the GUI could fix the problem. I could see the filesystem security attributes were hosed, and nothing, not even taking ownership, would successfully complete. After a quick visit to Google, I found the TechNet page for takeown.exe. It’s basically a tool for sysadmins to take ownership of a file with borked permissions. Perfect! That’s exactly what I needed.
Unfortunately, it didn’t work and failed with an unhelpful generic error. Turns out I was having a case of the stupids: the file was locked by a crashed application. Killing the process released the lock, and then I was able to delete the file and restore it from the previous day’s backup. On the plus side, I found what looks to be a great tool to keep bookmarked for the future!
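For the record, a typical takeown.exe invocation looks like this (the path is a made-up example; /F names the file and /A assigns ownership to the Administrators group rather than the current user):

```
takeown /F "D:\Shares\Data\stuck-file.docx" /A
```

Once ownership is reassigned, you can grant yourself permissions with icacls and then edit or delete the file normally.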
So you may have noticed there has been a lack of posts recently. I promise I haven’t given up on the site; I’ve just had a lot going on lately, so it seemed like a great time to shake things up and move web hosts. You might be wondering why I would change hosts when the site has only been around for a couple of months. It really came down to a couple of reasons.
First, for a technically oriented blog that’s specifically about being a systems administrator, it seemed like a cop-out to have the site hosted at Squarespace rather than on a VPS like a good little sysadmin. As of today, the site is running on a 1024 tier VPS with Linode (here’s my Linode referral link). When I had previously tried WordPress, I couldn’t find a theme I liked; there were a few that were OK, but nothing great. This theme (Simplified Blog) is lightweight, easy on the eyes, and features a responsive design that looks great on mobile devices. If this theme ever stops being updated for future revisions of WordPress (unlikely, since so far the dev has done a good job of keeping it current), I can always revert to the WordPress default 2016 theme, which looks much better to me than previous years’ themes did.
At $10 a month, Linode is twice as expensive as a tiny VPS at Digital Ocean. But for the extra $5 a month, I get a few extra GBs of storage (nice to have, but not critical) and 1 GB of RAM vs 512 MB. WordPress runs on PHP and MySQL, both of which like their RAM. The extra performance that RAM grants is worth the extra money per month. Even though this site doesn’t have a lot of readers (yet), what Sys Admin can turn down extra RAM? 🙂
When setting up WordPress, by default the way it structures links isn’t pretty. Not pretty at all. I wanted to have links so that the URL would tell you what the article is about. For instance, the URL of this post is: http://www.thesysadminlife.com/wordpress-permalinks/. However, changing the permalink setting within the WordPress dashboard ends up with a lot of broken links. Everything I could find online simply said if you have WordPress and you change that setting in the dashboard, it takes care of everything for you. Except it didn’t. Or at least it didn’t for me.
Eventually I found that the problem was a setting within my Apache config (on Debian, /etc/apache2/apache2.conf). Specifically, this block:

Options Indexes FollowSymLinks
AllowOverride None
Require all granted

Needed to be changed to:

Options Indexes FollowSymLinks
AllowOverride All
Require all granted

In other words, change AllowOverride None to All, which is what allows WordPress’s .htaccess rewrite rules to take effect.
Once that was completed and Apache was restarted, everything worked beautifully.
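For completeness, here’s a quick sanity-check sequence for the Debian setup described above (a sketch; the a2enmod step matters because pretty permalinks rely on mod_rewrite, which is not enabled by default on a fresh Debian Apache install):

```shell
sudo a2enmod rewrite        # WordPress's .htaccess rewrite rules need mod_rewrite
sudo apachectl configtest   # confirm the edited config parses cleanly
sudo service apache2 restart
```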
Had a strange one happen today. We recently replaced an old Windows Server 2003 box with a new Server 2012 R2 server. Everything was going great until we recreated the scheduled tasks: one of them failed instantly every time it started. The error message was:
“The Directory Name is Invalid”
After checking permissions for the domain account the task runs as, checking the various settings of the task, verifying the executable path is correct, and that the executable works when run manually as that user, I was out of ideas. Time to check Google.
Turns out the optional “Start in” field of the task’s action doesn’t like quotes. After removing the quotes from that field, the task worked fine.
For anyone out there rolling out desktop patches, watch out! Microsoft has re-issued a recent Windows 7 critical patch because it was crashing Outlook 2010 and 2013. The updated bulletin can be found here.
Looks like another serious bug has been found in vSphere 6. At the time of this post, no fix or patch has been released. It doesn’t affect the actual operation of VMs, but it can cause incorrect values to be returned when Changed Block Tracking (CBT) data is calculated. That means any backup software that relies on CBT (which is nearly all of them) can end up with backed-up data that isn’t restorable. I’m quite happy with my decision to stay on 5.5. Get it together, VMware!