From the Terminal

V4L2 Notes for Linux

Find out what is using capture devices

fuser /dev/video0

Find out name of process

ps axl | grep ${PID}

Show info

v4l2-ctl --all

Set resolution and frame rate

v4l2-ctl -v width=1920,height=1080 --set-parm=60
v4l2-ctl -v width=3840,height=2160 --set-parm=30

Reset USB Device

usbreset "Live Streaming USB Device"

Show Stream in GUI

qvidcap

How to make a launcher for Spotify in Linux that works with Spotify links

On Linux, the default Spotify deb package ships a desktop file whose Exec line just runs the "spotify" binary. Spotify does, however, expose a D-Bus interface for certain things. So instead of opening a second Spotify instance, we can look for an existing process and, if one is found, hand the Spotify link to it over D-Bus.

I wrote a simple launcher replacement. Just set your desktop file to use this instead of the default.
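Here's a minimal sketch of such a launcher, reconstructed under stated assumptions: the MPRIS bus name org.mpris.MediaPlayer2.spotify and the OpenUri method are the standard ones Spotify registers; the function names and the URL-normalizing sed are my own, not the original script.

```shell
#!/bin/sh
# Sketch of a launcher: normalize the link, then hand it to a running
# Spotify over D-Bus (MPRIS) or start a fresh instance.

# Convert an https://open.spotify.com/... link into a spotify: URI.
# Links that are already spotify: URIs pass through unchanged.
to_spotify_uri() {
    printf '%s\n' "$1" | sed -E \
        's#^https://open\.spotify\.com/([a-z]+)/([A-Za-z0-9]+).*#spotify:\1:\2#'
}

launch_spotify() {
    uri=$(to_spotify_uri "$1")
    if pgrep -x spotify >/dev/null 2>&1; then
        # OpenUri is part of the MPRIS specification.
        dbus-send --type=method_call \
            --dest=org.mpris.MediaPlayer2.spotify \
            /org/mpris/MediaPlayer2 \
            org.mpris.MediaPlayer2.Player.OpenUri \
            "string:${uri}"
    else
        spotify --uri="$uri" &
    fi
}

if [ -n "${1:-}" ]; then
    launch_spotify "$1"
fi
```

Save it as, say, ~/bin/spotify-launcher, mark it executable, and point the desktop file's Exec line at it with a %u field code so it receives the clicked link.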

Slack magic login broken on Linux with KDE

What's going on?

kde-open5 is breaking the links by making all characters lowercase.

Get the link by running:

while sleep .1; do ps aux | grep slack | grep -v grep | grep magic; done

Then just run `xdg-open [the link]`

The link should look like this:

slack://T8PPLKX2R/magic-login/3564479012256-b761e747066f87b708f43d5c0290bb076f276b121486a5fb6376af0dd7169e7d
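You can reproduce the breakage directly: lowercasing the link, which is effectively what kde-open5 does, mangles the case-sensitive workspace ID.

```shell
# Lowercase the magic link the way kde-open5 does.
link='slack://T8PPLKX2R/magic-login/3564479012256-b761e747066f87b708f43d5c0290bb076f276b121486a5fb6376af0dd7169e7d'
broken=$(printf '%s' "$link" | tr '[:upper:]' '[:lower:]')
# The workspace ID T8PPLKX2R comes out as t8pplkx2r, so Slack rejects the link.
```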

Cajun Style Fried Rice

In a dutch oven

  • Garlic - 2 cloves, diced
  • Butter - 2 tbs
  • Chicken / Beef stock - Whole 28 oz Carton
  • Carrots - Half a cup, diced
  • Red Pepper - 1 tbs 
  • Cajun style mix - 4 tbs 
  • Whole dried red peppers - 2-3
  • Medium grain rice - 2 cups

Throw butter into the dutch oven with the garlic and spices on high, add stock, whisk, add carrots, whisk.

Once the mixture is steaming add the rice. Whisk again.

Wait for boil then lower heat to a simmer and cover. Mix every 5 minutes until liquid is fully absorbed by the rice. No more than 15-20 minutes.

Serve with lime wedges to squeeze over the rice. Fresh cilantro is also a great garnish.

Pair with any protein.

Compiling and Installing RocketRaid RR64xL drivers on Linux 5.x

wget https://raw.githubusercontent.com/dsiggi/RocketRAID/master/sys-block/rr64xl/files/rr64xl-kernel-5.patch
wget http://www.highpoint-tech.com/BIOS_Driver/RR64xL/Linux/RR64xl_Linux_Src_v1.4.0_16_09_20.tar.gz
tar xf RR64xl_Linux_Src_v1.4.0_16_09_20.tar.gz
patch < rr64xl-kernel-5.patch
cd rr64xl-linux-src-v1.4.0/product/rr64xl/linux/
sudo make KERNELDIR=/usr/src/linux

Installing XDebug on anything for VSCode in 5 minutes (XDebug 3.x)

This guide is for XDebug 3.x only.

For XDebug 2.x, see the older version of this guide.


I see a lot of overcomplicated guides on XDebug, so I'll simplify things real quick for everyone.

Visual Studio Code has debugging support out of the box. Click on the Debug icon in the left bar (OS X: ⇧⌘D, Windows / Linux: CTRL+SHIFT+D) then click on the cog icon which should open your launch.json file or create one if none exists.


You must have the PHP XDebug extension installed of course.

Now add this to the launch.json file you have open:

        {
            "type": "php",
            "request": "launch",
            "name": "Listen For XDebug",
            "port": 9003,
            "pathMappings": {
                "/var/www/": "${workspaceRoot}"
            },
            "xdebugSettings": {
                "max_children": 256,
                "max_data": -1,
                "max_depth": 5
            },
            "ignore": [
                "**/vendor/**/*.php"
            ]
        }

Make sure you change /var/www/ to where your code is on your local server.

Set this in your php.ini

[xdebug]
xdebug.mode = debug
xdebug.start_with_request = yes
xdebug.idekey = VSCODE
xdebug.client_port = 9003
xdebug.client_host = "127.0.0.1"
xdebug.discover_client_host = 1
xdebug.log="/tmp/xdebug.log"
xdebug.cli_color = 1

You might need to do it twice. Once for CLI and once for PHP-FPM!
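Since the same block may need to go into both the CLI and the FPM php.ini, here's a small idempotent helper. This is a sketch: the function name is mine, the example paths are assumptions, and it writes only the trimmed essentials from the block above.

```shell
# Append the [xdebug] block to a php.ini unless it is already there.
enable_xdebug() {
    ini=$1
    grep -q '^\[xdebug\]' "$ini" && return 0
    cat >> "$ini" <<'EOF'
[xdebug]
xdebug.mode = debug
xdebug.start_with_request = yes
xdebug.client_port = 9003
xdebug.client_host = "127.0.0.1"
EOF
}

# e.g. (paths are assumptions, adjust to your install):
# enable_xdebug /etc/php/8.1/cli/php.ini
# enable_xdebug /etc/php/8.1/fpm/php.ini
```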


Typical locations for your php.ini file:

  • Linux: /etc/php/{version}/php.ini
  • macOS (Homebrew): /usr/local/etc/php/{version}/php.ini

Don't forget to restart php-fpm!

Now start the debugger by hitting the green play button.

How Connecting to Cloudflare's Warp VPN Can Change Your Data Center Depending On Whether You Are Using IPv4 or IPv6

I recently moved to Maui, HI and have a Spectrum 960/20 landline hooked up. Cloudflare peers with DrFortress in Honolulu, which lets me ping 1.1.1.1 and get some incredible response times, at least for Hawaii:

    64 bytes from 1.1.1.1: icmp_seq=1 ttl=58 time=13.7 ms

Awesome right?

Well then I wanted to figure out how to play games on Japanese servers. For American servers I get about 40-60 ms for west coast and 110-130 for east coast. In particular my DigitalOcean VPS in NYC responds in 125 ms.

For my testing I'm using the server IPs for the game FFXIV. In this case one of the servers in the "Mana" Data Center in Japan.

Here's what I get pinging the data center with no VPN.

    ping 124.150.157.49

    64 bytes from 124.150.157.49: icmp_seq=1 ttl=47 time=177 ms

Not ideal. Of course, that's because the traffic goes through LA on its way to Japan; a traceroute confirms it. But I knew going into this that there is a direct line from Honolulu to Japan, so it is, in fact, possible to send a network packet directly to Japan from Hawaii. Unfortunately, whether a packet takes that direct path depends on your ISP's routing. So I started playing with VPNs.

I discovered that Cloudflare actually has a free VPN called Warp and while it doesn't provide the best throughput there's no real issue with using it for low bandwidth game connections and simple web browsing.

Trying it Out

I have a Linux desktop running Gentoo and a Macbook Pro. To test this I set up Cloudflare Warp on my Macbook and pinged the same IP. Setting it up on macOS is as simple as installing the official app.

So let's see what we get on the Macbook.

    64 bytes from 124.150.157.49: icmp_seq=1 ttl=54 time=135.54 ms

A whole 40 ms shaved off the ping. Time to install it on my desktop!

On Linux there's an unofficial CLI utility called wgcf which will build you a Wireguard configuration you can use to connect to the service. I had to recompile my kernel to enable WireGuard, but no big deal. Now I ping the same IP from my desktop. Same internet connection, but this time over wired Ethernet instead of Wi-Fi.

    64 bytes from 124.150.157.49: icmp_seq=1 ttl=51 time=160 ms

Huh?!? How could this be? I was promised speed!!!!

Well it turns out after visiting https://www.cloudflare.com/cdn-cgi/trace I found out what the issue was.

On my desktop I'm getting assigned to the LA data center. But on my laptop I'm getting assigned to Honolulu. All my research on this topic suggests that the "closest" is chosen via UDP broadcast probing but I wasn't entirely convinced so I went digging.

It turns out if you look at the file wgcf-profile.conf it will have this setting:

    Endpoint = engage.cloudflareclient.com:2408

With IPv6 enabled, resolving that hostname gives me an IPv6 address, which sends me to the LAX data center. Resolving the hostname manually with nslookup gave me 162.159.192.1 for IPv4, letting me disregard the IPv6 address. I changed the Endpoint setting to that IPv4 address, and now my Wireguard connection goes to the HNL data center.
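The fix can be scripted; a sketch, assuming GNU sed, that the profile lives at ~/wgcf-profile.conf, and a function name of my own:

```shell
# Pin the Warp endpoint to a specific IPv4 address so WireGuard can't
# pick the IPv6 route to the wrong data center.
pin_endpoint() {
    profile=$1
    ipv4=$2
    sed -i -E "s/^Endpoint = .*/Endpoint = ${ipv4}:2408/" "$profile"
}

# e.g. pin_endpoint ~/wgcf-profile.conf 162.159.192.1
```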

Making Patches for Portage Packages

When you run emerge, Portage performs a series of phases on your package, in order:

  • fetch
  • unpack
  • prepare
  • compile

To set up a patch working environment, we let Portage do all of that, up through prepare. We do this with the ebuild command:

ebuild file.ebuild fetch
ebuild file.ebuild unpack
ebuild file.ebuild prepare


Now we copy the work directory to our own directory:

sudo cp -R /var/tmp/portage/www-client/ungoogled-chromium-88.0.4324.150-r2/work ~/Sources/ungoogled-chromium-88.0.4324.150-r2-work
sudo chown -R akujin:users ~/Sources/ungoogled-chromium-88.0.4324.150-r2-work


Now we have a work directory where we can build our patch. User patches are applied right after prepare finishes, which is why we copied the work directory at exactly that point. In our copy of the work directory, we initialize the folder as a git repo:

git init
git add .
git commit


Now make the modifications you want. When you're done, stage your changes and run:

sudo sh -c "git diff --staged > /etc/portage/patches/www-client/ungoogled-chromium/macos-hotkeys-on-linux.patch"


You can test your new patch simply by running emerge on your package.

How to securely share a network mount through an existing SSH server

I recently wanted to share a bunch of files with a friend over the internet. The files live on my media center, a Windows machine running a RAID, which already serves them on a Windows file share; I didn't want to expose that share to the internet, though. Since my workstation is a Linux box already running an SSH server on port 22 with key authentication working, I figured the cleanest approach was to add a new user with the shell disabled and the home directory pointed at the network share. Easier said than done, so I'm writing this post to cover the blockers I hit and how I solved them.

First I needed to add a new user. This is the command I used on Gentoo; on most other Linux distros you will likely need to use adduser instead.

sudo useradd ftp -M -N -s /sbin/nologin -d /dev/null
  • -M Don't create a home directory
  • -N Don't create a group with the user's name
  • -s /sbin/nologin sets the shell to nologin to disable interactive logins
  • -d /dev/null sets the home directory to /dev/null so there is no usable home

Now we need to edit sshd_config for this specific user.

Near the bottom of the file we add this to the config.

Match User ftp
        AuthorizedKeysFile /etc/ssh/authorized_keys_%u
        ForceCommand internal-sftp
        AllowTcpForwarding no
        X11Forwarding no
        ChrootDirectory /home/ftp

Since the default AuthorizedKeysFile path lives inside the user's home directory, we must override it to point somewhere else for just this one user. Populate that file with the public key as you normally would.

For security we disable TCP forwarding, so nobody can use the SFTP account as a free proxy, and X11 forwarding, so nobody can reach your X session. Finally, we need a chroot directory.


My initial instinct was to chroot directly into the media center mount at /mnt/mediacenter/, but that failed because the directory is a network share: the SFTP server requires the chroot directory to be owned by root. So even though this user has no real home directory, I used /home/ftp for the chroot anyway, owned by root rather than by the user as would be normal. Then I needed a link to the media center mount inside it. Soft links don't work across an SFTP chroot, so I used a bind mount instead:

sudo mount --bind /mnt/mediacenter/Library/Anime/ /home/ftp/Anime
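To make the bind mount survive a reboot, an equivalent /etc/fstab entry would look like this (paths taken from the command above; this is a sketch, adjust to your layout):

```
/mnt/mediacenter/Library/Anime  /home/ftp/Anime  none  bind  0  0
```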

Now we can test it. I decided to use my laptop.

There you have it. A fully secure SFTP over SSH.

This technique should work well with WSL1 and WSL2 as well if you're using Windows.

Maui Turtle

Just testing the social media image embedding