[{"content":"I\u0026rsquo;ve long used OpenVPN\u0026rsquo;s PtP tunnels to set up star-style network topologies across the WAN, with dynamic routing set up using OSPF/Quagga.\nWireGuard is new, allows simpler configuration, and is measurably faster than OpenVPN, so naturally I wanted to switch to it.\nHowever, WireGuard seems to be aimed at smaller, simpler use cases, with its AllowedIPs configuration option used to set up both static routes and an allowlist of what traffic may flow through the tunnel. With this, I would have needed to hard-code all subnets across the network in each end\u0026rsquo;s AllowedIPs, which would have prevented taking advantage of routing protocols to dynamically set up routes.\nAfter a long time of wanting PtP WireGuard tunnels with traffic allowed to flow freely within the tunnel, a la OpenVPN, I finally found the answer.\nThis answer is not well known, to the point of people on mailing lists describing it as impossible. 
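To make the problem concrete, this is roughly what plain cryptokey routing would force on Router A without a routing protocol: every remote subnet enumerated by hand in the peer stanza (a sketch using the example subnets from the configs below; any new subnet anywhere in the network means editing every peer again):

```ini
# Hard-coded cryptokey routing, i.e. what Table = off + OSPF avoids.
# Every subnet reachable via Router B must be listed explicitly:
[Peer]
PublicKey = RouterBsPubKey
AllowedIPs = 10.8.0.2/32, 192.168.122.0/24, 10.0.0.0/24
```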
WireGuard\nFor this to work with wg-quick, we need to utilize both the (uncommon) Table option and AllowedIPs.\nThe following is a simple working example.\nRouter A (acting as server):\nTunnel IP: 10.8.0.1/24\nAlso has access to: 10.30.0.0/24\nWireGuard conf (/etc/wireguard/wg2.conf):\n[Interface] Address = 10.8.0.1/24 PrivateKey = RouterA\u0026#39;sPrivKey ListenPort = 8999 Table = off # This is the crucial part [Peer] PublicKey = RouterBsPubKey AllowedIPs = 0.0.0.0/0 Router B (acting as client):\nTunnel IP: 10.8.0.2/24\nAlso has access to: 192.168.122.0/24 and 10.0.0.0/24\nWireGuard conf (/etc/wireguard/wg2.conf):\n[Interface] PrivateKey = RouterB\u0026#39;sPrivKey Address = 10.8.0.2/24 Table = off # This is the crucial part [Peer] PublicKey = RouterAsPubKey AllowedIPs = 0.0.0.0/0 Endpoint = RouterA\u0026#39;sWanIP:8999 PersistentKeepalive = 30 # Needed if B is behind a NAT To explain: ordinarily, setting a peer\u0026rsquo;s AllowedIPs to 0.0.0.0/0 causes WireGuard to force all outgoing traffic through the tunnel by replacing the default route in the host\u0026rsquo;s routing table.\nThe magic is that with \u0026lsquo;Table = off\u0026rsquo;, the host\u0026rsquo;s routing table is left untouched, any traffic is allowed to flow through the tunnel, and the user is free to set up any custom routes they want outside of WireGuard\u0026rsquo;s configuration.\nAfter configuring the conf files, enable the tunnels on either end (we chose identical tunnel names) using the following. 
systemctl enable --now wg-quick@wg2\nTested on Ubuntu and Alma/Fedora.\nOSPF\nNow that we have a tunnel that allows any traffic to flow, just like with OpenVPN, we need custom routes so all nodes connected to these subnets can easily find each other.\nOSPF works well; I used Quagga in the past, but wanted to give Bird a try.\nThe Bird conf for each router is very simple, with only the WireGuard interface name and stubnets being changed.\nThe following is for Router A.\n## Boilerplate from distro log syslog all; protocol device { } protocol direct { disabled;\t# Disable by default ipv4;\t# Connect to default IPv4 table ipv6;\t# ... and to default IPv6 table } protocol kernel { ipv4 {\t# Connect protocol to IPv4 table by channel export all;\t# Export to protocol. default is export none }; persist; } protocol kernel { ipv6 { export all; }; } protocol static { ipv4;\t# Again, IPv4 channel with default options } ## Sauce protocol ospf MyOSPF { ## Boilerplate taken from Bird\u0026#39;s example docs https://bird.network.cz/?get_doc\u0026amp;v=20\u0026amp;f=bird-6.html#ss6.8 ipv4 { export filter { if source = RTS_BGP then { ospf_metric1 = 100; accept; } reject; }; }; area 0.0.0.0 { ## What matters stubnet 10.30.0.0/24; interface \u0026#34;wg2\u0026#34; { type ptp; # VPN tunnels should be point-to-point }; }; } We define the WireGuard interface as type \u0026ldquo;ptp\u0026rdquo; as it is a tunnel.\nAny subnets or routes to be advertised that are not attached to other defined interfaces need to be configured as stubnets.\nI prefer to only configure the WireGuard interface, as this way unnecessary OSPF traffic is not sent out the other interfaces.\nThe Bird conf for Router B is the same as A except for the stubnet params, as both ends have their WireGuard interface named \u0026ldquo;wg2\u0026rdquo; for simplicity.\nAfter both WireGuard and Bird are running, you can see the routes from OSPF being created on the client host (Router B):\n[root@machina ~]# ip -4 ro *snip* 
10.30.0.0/24 via 10.8.0.1 dev wg2 proto bird metric 32 *snip* [root@machina ~]# birdc show ro BIRD 2.0.8 ready. Table master4: 10.30.0.0/24 unicast [MyOSPF 2021-11-21] * I (150/20) [10.8.0.1] via 10.8.0.1 on wg2 ","permalink":"http://localhost:1313/2021/11/24/using-wireguard-with-ospf-and-bird/","summary":"\u003cp\u003eI\u0026rsquo;ve long used OpenVPN\u0026rsquo;s PtP tunnels to set up star-style network topologies across the WAN, with dynamic routing set up using OSPF/Quagga.\u003c/p\u003e\n\u003cp\u003eWireGuard is new, allows simpler configuration, and is measurably faster than OpenVPN, so naturally I wanted to switch to it.\u003c/p\u003e\n\u003cp\u003eHowever, WireGuard seems to be aimed at smaller, simpler use cases, with its AllowedIPs configuration option being used to set up both static routes, as well as a form of allowlist regarding what traffic is allowed to flow through the tunnel. With this, I would have needed to hard code all subnets across the network in each end\u0026rsquo;s AllowedIPs, which would have prevented taking advantage of routing protocols to dynamically set up routes.\u003c/p\u003e","title":"Using WireGuard with OSPF and Bird"},{"content":"My mailserver\u0026rsquo;s root partition has gotten really full lately, and I didn\u0026rsquo;t want to incur downtime by taking it down to enlarge it offline:\nroot@mail:~# df -hT Filesystem Type Size Used Avail Use% Mounted on /dev/vda2 ext4 19G 17G 1.2G 94% / I run my VMs on KVM and I use an LVM LV for each VM, and according to an SO post, it is possible to increase the size of partitions within a VM online, so let\u0026rsquo;s go for it:\nFirstly, ensure there is enough free VG space to make room for the slice and check its current size:\nroot@fireball:~# vgs VG #PV #LV #SN Attr VSize VFree vg1 2 16 0 wz--n- 836.74g 143.74g Next, increase the LV:\nroot@fireball:~# lvresize -L +10G /dev/vg1/vm-mail Size of logical volume vg1/vm-mail changed from 20.00 GiB (5120 extents) to 30.00 GiB (7680 extents). 
Logical volume vm-mail successfully resized. Now that the LV is physically larger, we need to make KVM notify the guest that its disk has increased. First, get the virtio identifier:\nroot@fireball:~# virsh qemu-monitor-command mail info block --hmp drive-virtio-disk0: /dev/vg1/vm-mail (raw) Next, use that identifier like this:\nroot@fireball:~# virsh qemu-monitor-command mail block_resize drive-virtio-disk0 30G --hmp root@mail:~# Now, ensure dmesg within the VM indicates it has been notified of the size increase:\nroot@mail:~# dmesg -T [Fri Oct 18 20:21:53 2019] vda: detected capacity change from 21474836480 to 32212254720 Whenever I make a VM, I always make swap really tiny as vda1, and the rest goes to the root partition as vda2, so increases are always possible without a full reinstall.\nNow, we need to physically resize the partition within the VM. We\u0026rsquo;re going to use fdisk to first take note of the existing root partition and its starting sector, then delete it, recreate it at the same start sector, and re-apply the boot flag.\nFirst, learn what the disk layout currently is and the current start sector for the root partition:\nroot@mail:~# fdisk -l Disk /dev/vda: 20 GiB, 21474836480 bytes, 41943040 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disklabel type: dos Disk identifier: 0x6e63ab94 Device Boot Start End Sectors Size Id Type /dev/vda1 2048 2099199 2097152 1G 82 Linux swap / Solaris /dev/vda2 * 2099200 41943039 39843840 19G 83 Linux Now we\u0026rsquo;re ready to delete and recreate the partition\nroot@mail:~# fdisk /dev/vda Welcome to fdisk (util-linux 2.25.2). Changes will remain in memory only, until you decide to write them. Be careful before using the write command. Command (m for help): d Partition number (1,2, default 2): Partition 2 has been deleted. 
Command (m for help): n Partition type p primary (1 primary, 0 extended, 3 free) e extended (container for logical partitions) Select (default p): Using default response p. Partition number (2-4, default 2): First sector (2099200-62914559, default 2099200): 2099200 Last sector, +sectors or +size{K,M,G,T,P} (2099200-62914559, default 62914559): Created a new partition 2 of type \u0026#39;Linux\u0026#39; and of size 29 GiB. Command (m for help): p Disk /dev/vda: 30 GiB, 32212254720 bytes, 62914560 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disklabel type: dos Disk identifier: 0x6e63ab94 Device Boot Start End Sectors Size Id Type /dev/vda1 2048 2099199 2097152 1G 82 Linux swap / Solaris /dev/vda2 2099200 62914559 60815360 29G 83 Linux Command (m for help): a Partition number (1,2, default 2): The bootable flag on partition 2 is enabled now. Command (m for help): w The partition table has been altered. Calling ioctl() to re-read partition table. Re-reading the partition table failed.: Device or resource busy The kernel still uses the old table. The new table will be used at the next reboot or after you run partprobe(8) or kpartx(8). root@mail:~# That warning at the end is expected and can be safely ignored. Last step is to resize the filesystem itself. Luckily ext4 lets you do this online in a single easy command:\nroot@mail:~# resize2fs /dev/vda2 resize2fs 1.42.12 (29-Aug-2014) Filesystem at /dev/vda2 is mounted on /; on-line resizing required old_desc_blocks = 2, new_desc_blocks = 2 The filesystem on /dev/vda2 is now 7601920 (4k) blocks long. root@mail:~# There, much better:\nroot@mail:~# df -hT Filesystem Type Size Used Avail Use% Mounted on /dev/vda2 ext4 29G 17G 11G 61% / This method is slightly dangerous if done wrong, but ideally you keep daily backups, right? 
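For the record, the numbers in the transcripts above are self-consistent; a throwaway sector-math check (sector size 512 bytes, counts taken from the fdisk/dmesg output on this VM):

```shell
# Sizes reported by dmesg/fdisk should agree: sectors * 512 = bytes.
sector_size=512
old_disk_sectors=41943040   # 20 GiB disk, before the resize
new_disk_sectors=62914560   # 30 GiB disk, after block_resize
vda2_start=2099200          # start sector must be identical pre/post
vda2_end=62914559           # new last sector of the disk

echo "old disk: $(( old_disk_sectors * sector_size )) bytes"   # 21474836480
echo "new disk: $(( new_disk_sectors * sector_size )) bytes"   # 32212254720
echo "new vda2: $(( vda2_end - vda2_start + 1 )) sectors"      # 60815360
```

These match the \u0026ldquo;detected capacity change from 21474836480 to 32212254720\u0026rdquo; line dmesg printed, and the 60815360-sector vda2 that fdisk created.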
Besides, why incur needless downtime and ruin a 320+ day uptime?\n","permalink":"http://localhost:1313/2019/10/18/live-increasing-a-kvm-vms-root-partition/","summary":"\u003cp\u003eMy mailserver\u0026rsquo;s root partition has gotten really full lately, and I didn\u0026rsquo;t want to incur downtime by taking it down to enlarge offline:\u003c/p\u003e\n\u003cdiv class=\"highlight\"\u003e\u003cpre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;\"\u003e\u003ccode class=\"language-fallback\" data-lang=\"fallback\"\u003e\u003cspan style=\"display:flex;\"\u003e\u003cspan\u003eroot@mail:~# df -hT\n\u003c/span\u003e\u003c/span\u003e\u003cspan style=\"display:flex;\"\u003e\u003cspan\u003eFilesystem     Type      Size  Used Avail Use% Mounted on\n\u003c/span\u003e\u003c/span\u003e\u003cspan style=\"display:flex;\"\u003e\u003cspan\u003e/dev/vda2      ext4       19G   17G  1.2G  94% /\n\u003c/span\u003e\u003c/span\u003e\u003c/code\u003e\u003c/pre\u003e\u003c/div\u003e\u003cp\u003eI run my VMs on KVM and I use an LVM LV for each VM, and according to a \u003ca href=\"https://serverfault.com/a/724156\"\u003eSO post\u003c/a\u003e, it is possible to increase the size partitions within a VM online, so let\u0026rsquo;s go for it:\u003c/p\u003e","title":"Live increasing a KVM VM's root partition"},{"content":"I\u0026rsquo;ve been building computers/servers for roughly 11 years now\u0026ndash;first build in 2006 was an Athlon 64 3800+ with a Geforce 6600\u0026ndash;and I\u0026rsquo;ve always only cared about the components themselves, less so what the case or innards look like.\nI hit a sort of \u0026ldquo;midlife crisis\u0026rdquo; where I wanted to make my home box look really cool. At the time it had 6 hard drives\u0026ndash;two mdadm raid1 arrays + 2 SSDs carved using LVM\u0026ndash;and I wanted to upgrade to a case with a window, as I was using a windowless Nanoxia Deep Silence 4. 
The DS4 is a great case but it\u0026rsquo;s not meant for flashiness. When I built it I hadn\u0026rsquo;t cared about cable management so it was a mess on the inside.\nI opted to go with a Phanteks P400s TG Red Edition as it had a huge tempered glass side panel, LED lights, and room for 8 drives: 2 SSDs behind the motherboard panel, 2 3.5\u0026quot; slots in the basement, and 4 optional drive slots in the front.\nAt some point I was convinced to try liquid cooling. I was able to fit a Corsair H80i v2 at the front of the case and it worked really well.\nYet I wanted more. I spent a month researching custom liquid cooling loops, mostly through reading EKWB\u0026rsquo;s excellent guides and watching JayzTwoCents\u0026rsquo;s awesome water cooling tutorial videos.\nI chose the P400s primarily for the drive capacity; at the time I was still averse to liquid cooling so I didn\u0026rsquo;t plan for it. The P400s falls short for liquid cooling in that the top can\u0026rsquo;t fit dual fans + a radiator as the motherboard is too high, and the length of the case doesn\u0026rsquo;t easily permit a custom loop with a full-length graphics card. The basement has\u0026ndash;undocumented\u0026ndash;pump mounting holes, but they aren\u0026rsquo;t useful when you have a radiator+fans installed.\nI opted to go with EKWB parts and a single 240mm rad in the front of the case with the pump+res combo mounted against the rad itself. 
It all came together quite nicely with a few caveats, after I migrated down from 6 drives to 4 to remove the extras visible in the case taking up radiator space.\nThe pump+res combo mounted easily to the front of the radiator:\nWith all of the parts in place, the leak test (distilled water) was uneventful, likely due to using compression fittings:\nHere is the finished result:\nThere are several things I want to make better:\n- The drainage port should be less prominent\n- There isn\u0026rsquo;t room for a normal-length video card with the pump where it is\n- The CPU specs are a bit outdated and that part of this system is due for an upgrade\nFuture plans:\n- Instead of mounting the pump+res against the radiator, drill holes at the bottom right part of the case so the reservoir tube is just flush against the radiator, to free up lots of room for a full-length GFX.\n- Get a new mobo/RAM, with the i7-8700k when it\u0026rsquo;s back in stock\n- Upgrade to a GTX 1080 with a water block, so I can add that to this loop\n- At some point go with rigid tubing, now that I\u0026rsquo;ve had a good experience with soft tubing\nIt\u0026rsquo;s possible I\u0026rsquo;ll need more radiator space, and I\u0026rsquo;ll need to decide between adding 120mm radiators in the rear and the top-right part of the case, or going with a different case altogether. I\u0026rsquo;d like to avoid going with a separate case as alternatives either feel wasteful, look ugly, or are too big.\nThe Phanteks Enthoo Evolv is pretty much meant for custom liquid cooling loops, but I hate how the front of it looks. The Fractal Design series would work, but they just have window/plastic side panels instead of tempered glass.\nNotable EKWB parts used:\n- EK-CoolStream PE 240 (Dual) - EK-XRES 140 Revo D5 PWM (incl. 
pump) - EK-Supremacy EVO CPU Water Block (Nickel) - EK-CryoFuel Blood Red Premix 900 mL - EK-AF Ball Valve (10mm) G1/4 - Nickel (for drain port) - EK-AF T-Splitter 3F G1/4 - Nickel (for drain port) - 2x EK-Vardar EVO 120S (1150rpm) - EK-ACF Fitting 10/16mm - Red (6-pack) - EK-UNI Pump Bracket (120mm FAN) Vertical - EK-DuraClear 9,5/15,9mm 3M\n","permalink":"http://localhost:1313/2017/11/10/phanteks-p400s-custom-liquid-cooling-loop/","summary":"\u003cp\u003eI\u0026rsquo;ve been building computers/servers for roughly 11 years now\u0026ndash;first build in 2006 was an Athlon 64 3800+ with a Geforce 6600\u0026ndash;and I\u0026rsquo;ve always only cared about the components themselves, less so what the case or innards look like.\u003c/p\u003e\n\u003cp\u003eI hit a sort of \u0026ldquo;midlife crisis\u0026rdquo; where I wanted to make my home box look really cool. At the time it had 6 hard drives\u0026ndash;two mdadm raid1 arrays + 2 SSDs carved using LVM\u0026ndash;and I wanted to upgrade to a case with a window, as I was using a windowless \u003ca href=\"http://nanoxia-world.com/en/products/cases/deep-silence-series/deep-silence-4/217/deep-silence-4-dark-black?c=44\"\u003eNanoxia Deep Silence 4\u003c/a\u003e. The DS4 is a great case but it\u0026rsquo;s not meant for flashyness. When I built it I hadn\u0026rsquo;t cared about cable management so it was a mess on the inside.\u003c/p\u003e","title":"Phanteks p400s Custom Liquid Cooling Loop"},{"content":"I have a Sony bluetooth speaker I usually use with iPhone and Macbooks. I\u0026rsquo;ve wanted to use it with my Ubuntu Xenial (4.4.0-93-generic) desktop for a long time but never got around to getting a bluetooth dongle or an RCA cable.\nToday I went to Fry\u0026rsquo;s to get some cables for another project and finally decided to grab a USB Bluetooth dongle. I picked up a Sabrent BT-UB40 as it claims to have Linux support.\nThe device was immediately recognized and supported in the Unity UI after plugging it in. 
It also supported pairing to my Sony speaker. However, when trying to \u0026ldquo;connect\u0026rdquo; the following messages were dumped to syslog:\nNov 5 14:02:49 machina bluetoothd[26700]: Failed to obtain handles for \u0026quot;Service Changed\u0026quot; characteristic Nov 5 14:02:49 machina bluetoothd[26700]: Not enough free handles to register service Nov 5 14:02:49 machina bluetoothd[26700]: Error adding Link Loss service Nov 5 14:02:49 machina bluetoothd[26700]: Not enough free handles to register service Nov 5 14:02:49 machina bluetoothd[26700]: message repeated 2 times: [ Not enough free handles to register service] Nov 5 14:02:49 machina bluetoothd[26700]: Current Time Service could not be registered Nov 5 14:02:49 machina bluetoothd[26700]: gatt-time-server: Input/output error (5) Nov 5 14:02:49 machina bluetoothd[26700]: Not enough free handles to register service Nov 5 14:02:49 machina bluetoothd[26700]: Not enough free handles to register service Nov 5 14:02:49 machina bluetoothd[26700]: Sap driver initialization failed.\nAfter a bunch of googling and looking at logs, installing the following package and then disconnecting and re-pairing the device makes it usable: apt-get install pulseaudio-module-bluetooth\nProof: Linux on the desktop has progressed significantly over the past 10 years in terms of UI to manage hardware, yet some polish is still needed to make things completely JFW out of the box.\n","permalink":"http://localhost:1313/2017/11/05/fixing-bluetooth-audio-in-ubuntu-xenial/","summary":"\u003cp\u003eI have a \u003ca href=\"https://www.amazon.com/Sony-SRSX5-Portable-Bluetooth-Speakerphone/dp/B00I053ICY?th=1\"\u003eSony bluetooth speaker\u003c/a\u003e I usually use with iPhone and Macbooks. 
I\u0026rsquo;ve wanted to use it with my Ubuntu Xenial (4.4.0-93-generic) desktop for a long time but never got around to getting a bluetooth dongle or an RCA cable.\u003c/p\u003e\n\u003cp\u003eToday I went to Fry\u0026rsquo;s to get some cables for another project and finally decided to grab a USB Bluetooth dongle. I picked up a \u003ca href=\"https://www.sabrent.com/product/BT-UB40/usb-bluetooth-4-0-micro-adapter-pc-v4-0-class-2-low-energy-technology\"\u003eSabrent BT-UB40\u003c/a\u003e as it claims to have Linux support.\u003c/p\u003e","title":"Fixing Bluetooth audio in Ubuntu Xenial"},{"content":"I have a Win7 qemu VM with a GTX 750 and a keyboard+mouse passed through, and the following is a rough guide, inspired by other similar guides which didn\u0026rsquo;t quite work for me or weren\u0026rsquo;t informative enough.\nBackground: I\u0026rsquo;m running 64bit Debian Jessie with Qemu/kvm from stock apt. I\u0026rsquo;m not using libvirt, as the older version in Debian\u0026rsquo;s apt does not support -cpu kvm=off among other things. This is a file server/VM host that I choose to use headless, and now it also functions as a very capable gaming rig thanks to virtualization.\nroot@machina:~# qemu-system-x86_64 -version QEMU emulator version 2.1.2 (Debian 1:2.1+dfsg-12), Copyright (c) 2003-2008 Fabrice Bellard root@machina:~# Hardware (bought from Fry\u0026rsquo;s in Sunnyvale, CA):\nintel i5-4950 (newegg) Gigabyte GeForce GTX 750 2GB (gigabyte) Gigabyte z97m-ds3h (newegg) The most important part of that CPU is that it supports VT-d, which is used for hardware passthrough to VMs. It\u0026rsquo;s also damn fast, which helps for gaming performance.\nI chose that Gigabyte board as related models have been reported working. I have since added the working setup here to that doc.\nAlso, I\u0026rsquo;m using nvidia driver version 335.23 (the oldest that supports this card) as apparently that is the last to require -cpu kvm=off. 
I have not tried newer drivers as this one works very well, and if it ain\u0026rsquo;t broke (just slow), don\u0026rsquo;t fix it.\nI\u0026rsquo;m using win7 as the tablet UI in later Windows releases sucks ass and I had an iso lying around already.\nStep 1: Get a supported kernel and tweak grub boot options\nStep 1.1: Because I am using intel integrated graphics, I\u0026rsquo;m using 3.18.0 with the i915 patches (google). I should be using a newer kernel instead as this one is hella old, but it works and this box isn\u0026rsquo;t internet-facing. I recommend compiling the kernel on an SSD.\nI should include a guide for this later for those who aren\u0026rsquo;t used to compiling kernels.\nStep 1.2: Set the following in /etc/default/grub to enable the i915 patch and intel_iommu.\nGRUB_CMDLINE_LINUX=\u0026#34;intel_iommu=on i915.enable_hd_vgaarb=1\u0026#34; Then run this:\nsudo update-grub\nStep 2: Device finding\nMake sure your IOMMU groups are correct.\nThese are the devices I\u0026rsquo;m passing. Note the preceding numbers and the vendor:device ID pairs:\njoe@machina:~$ lspci -vnn | grep -i nvidia 01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM107 [GeForce GTX 750] [10de:1381] (rev a2) (prog-if 00 [VGA controller]) 01:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:0fbc] (rev a1) joe@machina:~$ Also my separate keyboard/mouse devices (a USB Apple keyboard from ~2001 + a cheap USB mouse + cheap USB sound card + wired Xbox 360 controller):\njoe@machina:~$ lsusb .. Bus 003 Device 009: ID 093a:2510 Pixart Imaging, Inc. Optical Mouse Bus 003 Device 006: ID 05ac:0204 Apple, Inc. Bus 003 Device 011: ID 0d8c:000c C-Media Electronics, Inc. Audio Adapter Bus 003 Device 010: ID 045e:028e Microsoft Corp. Xbox360 Controller ...\nStep 3: Networking\nConfigure qemu networking. I have my eth0 bridged to br0 (that is a separate article), and the following conf needs to exist to pass br0 to the VM. 
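On the point in step 2 about making sure your IOMMU groups are correct: a small sketch (not from the original guide) that lists which devices share each group, so you can confirm 01:00.0 and its HDMI audio function 01:00.1 are isolated from everything else:

```shell
#!/bin/sh
# List every PCI device per IOMMU group; requires a kernel booted
# with intel_iommu=on (the grub change from step 1.2).
base=/sys/kernel/iommu_groups
if [ -d "$base" ]; then
    for dev in "$base"/*/devices/*; do
        group=${dev#"$base"/}   # strip the /sys prefix...
        group=${group%%/*}      # ...leaving just the group number
        echo "group $group: ${dev##*/}"
    done | sort -V
else
    echo "no IOMMU groups found; is intel_iommu=on set?" >&2
fi
```

Devices that land in the same group generally have to be bound to vfio-pci (or left alone) together.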
I am using qemu\u0026rsquo;s bridge helper rather than creating the tap devices manually. br0 corresponds to the bridge name in both snippets below.\nqemu bridge helper (this file won\u0026rsquo;t exist by default): root@machina:~# cat /etc/qemu/bridge.conf allow br0 root@machina:~# network bridging conf There are lots of ways of doing this, but this is mine. If you choose to go this route, install bridge-utils with apt.\nroot@machina:~# cat /etc/network/interfaces auto lo iface lo inet loopback iface eth0 inet manual auto br0 iface br0 inet dhcp bridge_ports eth0 root@machina:~# Step 4: VM Start Script Script I run. Interesting parts that will change for your setup are in bold.\n#!/bin/bash vfiobind() { dev=\u0026#34;$1\u0026#34; vendor=$(cat /sys/bus/pci/devices/$dev/vendor) device=$(cat /sys/bus/pci/devices/$dev/device) if [ -e /sys/bus/pci/devices/$dev/driver ]; then echo $dev \u0026gt; /sys/bus/pci/devices/$dev/driver/unbind fi echo $vendor $device \u0026gt; /sys/bus/pci/drivers/vfio-pci/new_id } modprobe vfio-pci for line in 0000:01:00.0 0000:01:00.1; do vfiobind $line done sudo qemu-system-x86_64 -enable-kvm -M q35 -m 4096 -cpu host,kvm=off \\ -smp 4,sockets=1,cores=4,threads=1 \\ -bios /usr/share/seabios/bios.bin -vga none \\ -device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \\ -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on \\ -device vfio-pci,host=01:00.1,bus=root.1,addr=00.1 \\ -drive file=/dev/vg-ssd/vm-win7,id=disk,format=raw,if=virtio \\ -drive file=/dev/vg-ssd/vm-win7-slice2,format=raw,if=virtio \\ -usb -usbdevice host:093a:2510 -usbdevice host:05ac:0204 \\ -usb -usbdevice host:045e:028e \\ -usb -usbdevice host:0d8c:000c \\ -boot menu=on \\ -netdev tap,helper=/usr/lib/qemu/qemu-bridge-helper,id=hn0 -device virtio-net-pci,netdev=hn0,id=nic1 \\ -vnc :2 It is highly irresponsible of me to run this as root, but it\u0026rsquo;s easy. 
If the guest manages privilege escalation due to a bug in kvm/qemu (like the floppy driver one months back), your box might get rooted and/or fucked. Keep your qemu packages updated! It might help that I\u0026rsquo;m using Debian stable rather than arch, as qemu might only be updated for security patches rather than new features which could break this.\nWithout -vnc :2, this command requires a working GUI with SDL to run, which sucks as I run this in screen from ssh. I also like being able to connect via VNC. This is not at all a boot menu or a GUI interface to the VM, but rather a qemu debug command prompt. I recommend restricting access to this (port 5902) using iptables.\nkvm=off stops qemu from advertising the fact that it is running KVM to the guest. This is needed for newer nvidia drivers, as nvidia refuses to work if it thinks it\u0026rsquo;s a VM.\n-cpu host exposes all of the host CPUs to the VM nearly verbatim. From what I\u0026rsquo;ve read, this is the best option for performance.\nI\u0026rsquo;m giving it 4GB of RAM (I have 16GB on the host) which is apparently plenty.\nThe path to the seabios binary changes slightly per debian qemu release (dpkg -L seabios | grep bios.bin).\nI store my VMs as LVM LVs/slices on an SSD. I hear that passing raw block devices (e.g. /dev/sd$X) to VMs doesn\u0026rsquo;t fare well, and I like being able to carve up the SSDs into other block devices for other VMs/etc. There\u0026rsquo;s also a possibility that using an LVM LV slightly avoids filesystem overhead you\u0026rsquo;d get if you were using a sparse file on ext4/etc.\n-boot menu=on probably does not need to be there but I like the verbosity it gives during the boot process.\nThat vfio bind function has been copy/pasted around various other articles. The modprobe stanza might not be needed.\nDuring my install, I used the following. 
After I installed all of the virtio drivers, I stopped using the Realtek NIC and IDE disks:\nsudo qemu-system-x86_64 -enable-kvm -M q35 -m 4096 -cpu host,kvm=off \\ -smp 4,sockets=1,cores=4,threads=1 \\ -bios /usr/share/seabios/bios.bin -vga none \\ -device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \\ -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on \\ -device vfio-pci,host=01:00.1,bus=root.1,addr=00.1 \\ -drive file=/dev/vg-ssd/vm-win7,id=disk,format=raw -device ide-hd,bus=ide.0,drive=disk \\ -drive file=/dev/vg-ssd/vm-win7-slice2,id=disk2,format=raw -device ide-hd,bus=ide.1,drive=disk2 \\ -usb -usbdevice host:093a:2510 -usbdevice host:05ac:0204 \\ -boot menu=on \\ -netdev tap,helper=/usr/lib/qemu/qemu-bridge-helper,id=hn0 -device rtl8139,netdev=hn0,id=nic1 \\ -drive file=/srv/ssd/misc/vm/win7.iso,id=isocd -device ide-cd,bus=ide.1,drive=isocd\nAppendix 1: screenshots!\nMy mac mini connected as a Steam in-home streaming client to the win7 VM: link\nWindows experience index: link\nSpeccy/device manager (seeing a geforce card listed with the VirtIO drivers is mad trippy): link\nVirtIO drivers make the network cards think they\u0026rsquo;re 10gig: link\nhtop on host showing the qemu procs: link\nAppendix 2: links\nThere are some very good docs on this subject, and the steps I am using here are shamelessly ripped from them, albeit tweaked a little. 
Highlights/credits:\nAlex\u0026rsquo;s Wiki, which should be treated as the source of truth for the subject: http://vfio.blogspot.com/ Google spreadsheet containing a list of supported/unsupported hardware: link Forum thread with useful info: link ","permalink":"http://localhost:1313/2015/07/21/nvidia-gtx-750-kvm-vga-pass-through/","summary":"\u003cp\u003eI have a Win7 qemu VM with a GTX 750 and a keyboard+mouse passed through, and the following is a rough guide, inspired by other similar guides which didn\u0026rsquo;t quite work for me or weren\u0026rsquo;t informative enough.\u003c/p\u003e\n\u003ch4 id=\"background\"\u003eBackground:\u003c/h4\u003e\n\u003cp\u003eI\u0026rsquo;m running 64bit Debian Jessie with Qemu/kvm from stock apt. I\u0026rsquo;m not using libvirt, as the older version in Debian\u0026rsquo;s apt does not support -cpu kvm=off among other things. This is a file server/VM host that I choose to use headless, and now it also functions as a very capable gaming rig thanks to virtualization.\u003c/p\u003e","title":"Win7 KVM VGA Passthrough (gtx 750)"},{"content":"My last iPhone was in 2009, and I switched after around a year when I got sick of AT\u0026amp;T, which used to be the sole carrier for iPhones. Since then I used and loved the Motorola Droid and its Motorola successors. Yesterday, I took the plunge and got an iPhone 5S. Despite the iPhone 6 coming out next month, the feeling of nostalgia was too overbearing to make me want to wait.\nBeing a long-term Linux on the desktop user, I jumped on the Apple bandwagon for computers around a year ago. I switched my primary desktop from Debian Unstable/sid on home-built hardware to a Mac Mini running OS X. Later on, I bought a MacBook Air. My day job provides me with a MacBook Pro and a RHEL desktop, which I choose to use headless via ssh/mosh. 
I\u0026rsquo;ve taken to the \u0026ldquo;Linux/BSD on servers and OS X on desktops\u0026rdquo; paradigm.\nI don\u0026rsquo;t have any major complaints with OS X, mainly because most of the open source apps I used on Linux work just as well (Firefox, Thunderbird, Filezilla, Chrome, others) and my favorite CLI apps either come bundled (vim, screen, bash, ssh) or are easily installable with brew or compilable from source. You get the ability to use the best of open source, as well as apps which don\u0026rsquo;t run on Linux such as MS Office and the Adobe suite, without the headache of tweaking Wine. Setting up netatalk/AFP is also nicer and more integrated than using plain NFS to share files from my Debian NAS.\nHowever, there are several complaints and gripes I have about iOS, and I felt like making a blog post to list them:\nThings I don\u0026rsquo;t like about iOS:\n- Can\u0026rsquo;t play OGG/FLAC. My music collection is a hodgepodge of mp3/wma/flac/m4a/ogg, all of which Nightingale (and Rhythmbox) play without issue. That Python script I wrote to convert a ton of music files to low-quality MP3 may finally come in handy.\n- Can\u0026rsquo;t save non-picture files to the device, such as PDFs or tarballs or anything else you may want to occasionally save.\n- Can\u0026rsquo;t browse the filesystem. No external storage like an SD card or similar.\n- No SwiftKey alternative. I hear that a SwiftKey iOS port is in development, and I\u0026rsquo;m looking forward to it.\n- No app privilege limitation or ability to see what functionality of your phone is given to apps. I don\u0026rsquo;t particularly care that much though.\n- Can\u0026rsquo;t transfer files/pictures via Bluetooth. I had gotten accustomed to taking a picture and using the bluetooth file transfer app in OS X to get them off my phone. Emailing pictures I take to myself is a bit of an inconvenience.\n- AirDrop can\u0026rsquo;t copy files between OS X and iPhones. 
I had guessed there would be some form of nice integration between the two platforms, aside from iTunes. Luckily FaceTime is immune to this limitation.\n- No 4chan browser apps in appstore. Mimi for android is fantastic but apparently Apple kicked off all equivalent apps years ago. It\u0026rsquo;s a little tempting to make a 4chan browser in Swift and try my luck at getting it added.\n- I don\u0026rsquo;t think you can install arbitrary apps in the same way you can on android by copying over a .APK package and accepting the security warnings.\n- No floating chat heads in Facebook Messenger. I assume this is due to less functionality being given to apps.\n- NSA is probably watching everything I do, but this con likely applies to Android devices as well.\nThe pros: - Fingerprint unlock. I didn\u0026rsquo;t know it came with this so it was a bit of a pleasant surprise when the setup wizard prompted for my thumb print. Makes waking it from sleep and authorizing purchases very convenient.\n- Higher quality apps. The apps that have their Android equivalents are more polished. I attribute this more to app devs feeling that there may be more iPhone users than android users, or that they\u0026rsquo;re just more likely to pay for apps. Examples: Uber, Yelp, Facebook, Waze, Kindle, others.\n- The device (iPhone 5S) itself is beautifully made. Metal case + glass screen. The two android models I\u0026rsquo;ve had were just plastic, and I feel that\u0026rsquo;s how most of them are. This is also a con as it\u0026rsquo;s more likely to crack and break, whereas I put my droids through hell without their screens getting cracked.\n- Lightning connector is better than micro USB. Akin to the new power connectors for macbooks, it doesn\u0026rsquo;t have a \u0026ldquo;right side up\u0026rdquo; way of connecting.\n- Camera/photo app offers cropping and adding filters to pictures. 
I imagine that newer android versions have this built in but I haven\u0026rsquo;t looked.\n- FaceTime is awesome.\n- GoogleHangout app provides good enough access and integration to gchat, and the ability to add google accounts to the phone\u0026rsquo;s internal account system provides easy access to my google contacts.\nConclusion The restrictions and missing features are likely all \u0026ldquo;by design.\u0026rdquo; It\u0026rsquo;d really suck if Apple applied this approach to the same extent to OS X.\nI\u0026rsquo;m likely going to use the iPhone 5S for a year or so and then go back to an android device made by Motorola.\n","permalink":"http://localhost:1313/2014/08/18/moving-from-android-to-iphoneios/","summary":"\u003cp\u003eMy last iPhone was in 2009, and I switched after around a year when I got sick of AT@T, which used to be the sole carrier for iPhones. Since then I used and loved the Motorola Droid and its \u003ca href=\"http://en.wikipedia.org/wiki/Droid_Razr_M\"\u003eMotorola successors\u003c/a\u003e. Yesterday, I took the plunge and got an iPhone 5S. Despite the iPhone 6 coming out next month, the feeling of nostalgia was too overbearing to make me want to wait.\u003c/p\u003e","title":"Moving from Android to iPhone/iOS"},{"content":"Say you have a large SVN repo with 617 commits. You want to physically delete the last 6 so you\u0026rsquo;re back to r611. 
You do not want the data contained in these revisions to exist so svn revert is not appropriate.\nThe most elegant way of killing off r612-r617 is to make an SVN dump up until revision r611 and then restore from it.\nDump the server-side SVN folder and then move it aside:\nsvnadmin dump myrepo/ -r 1:611 \u0026gt; myrepo.dump mv myrepo myrepo.old\nRecreate a fresh repo and import the dump svnadmin create myrepo svnadmin load myrepo \u0026lt; myrepo.dump\nYou\u0026rsquo;re done.\n","permalink":"http://localhost:1313/2014/04/26/deleting-svn-revisions/","summary":"\u003cp\u003eSay you have a large SVN repo with 617 commits. You want to physically delete the last 6 so you\u0026rsquo;re back to r611. You do not want the data contained in these revisions to exist so \u003ccode\u003esvn revert\u003c/code\u003e is not appropriate.\u003c/p\u003e\n\u003cp\u003eThe most elegant way of killing off r612-r617 is to make an SVN dump up until revision r611 and then restore from it.\u003c/p\u003e\n\u003cp\u003eDump the server-side SVN folder and then move it aside:\u003c/p\u003e","title":"Deleting SVN Revisions"},{"content":"I am a long term user of screen+irssi, a quite common way of using IRC for unix-inclined neckbeards. One problem with this approach is that you will not be notified of events such as nick highlighting and PMs outside of your terminal window.\nA quick hack is to use the fnotify irssi script to write highlights to a text file, and then a quick shell one liner to continually read events (lines) from this file and alert you via the gui. 
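In outline, the mechanism this post sets up looks like the following sketch. popup_cmd and watch_fnotify are names made up here for illustration, and popup_cmd is a stand-in printf so the loop can be exercised without an X display — swap in xmessage or notify-send for real use:

```shell
# Dry-run sketch of the notification loop. fnotify appends one line
# per highlight/PM to ~/.irssi/fnotify; we react to each new line.
# popup_cmd is a stand-in so this runs without an X display.
popup_cmd() {
  printf 'notify: %s\n' "$1"
}

watch_fnotify() {
  # -n0: skip lines already in the file; -f: follow as irssi appends
  tail -n0 -f "$1" | while IFS= read -r line; do
    popup_cmd "$line"
  done
}
```

Run it against the file the script writes, e.g. `watch_fnotify ~/.irssi/fnotify`.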
This post assumes basic knowledge of irssi, which I\u0026rsquo;m not going to cover here.\nInstall fnotify:\nwget -P ~/.irssi/ https://raw.github.com/rndstr/fnotify/master/fnotify.pl\nRun it inside irssi: /run fnotify [00:16] ~~~Irssi: Loaded script fnotify\nRun this on your desktop and let it run indefinitely, and you\u0026rsquo;ll enjoy being notified of important events instead of finding them after the fact:\ntail -n0 -f .irssi/fnotify | xargs -I{} xmessage \u0026quot;{}\u0026quot;\nProtip: the xmessage command is a really ugly X11 built-in command and is primarily suited to minimal window managers such as fluxbox. Its notification popups will look quite out of place in a full blown desktop such as Gnome or KDE; using a different command such as notify-send or similar may be more appropriate.\n","permalink":"http://localhost:1313/2014/01/09/irssi-desktop-notifications-with-fnotify-and-xmessage/","summary":"\u003cp\u003eI am a long term user of screen+irssi, a quite common way of using IRC for unix-inclined people neckbeards. One problem with this approach is that you will not be notified of events such as nick highlighting and PMs outside of your terminal window.\u003c/p\u003e\n\u003cp\u003eA quick hack is to use the fnotify irssi script to write highlights to a text file, and then a quick shell one liner to continually read events (lines) from this file and alert you via the gui. This post assumes basic knowledge of irssi, which I\u0026rsquo;m not going to cover here.\u003c/p\u003e","title":"desktop notifications for irssi nick highlights"},{"content":"There are at least two ways of getting the progress of the dd command. One is sending the dd command the -USR1 kill signal, which will cause it to print out its current progress to stderr:\nkill -USR1 `pidof dd`\nThe other way is to examine the fdinfo file (either 0 or 1) for the dd process under /proc to see how much data has currently been copied. 
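That fdinfo read can be wrapped in a pair of small helpers — a sketch only, with bytes_to_gb and dd_progress being made-up names, and awk doing the division in place of bc:

```shell
# Sketch of the fdinfo approach; bytes_to_gb and dd_progress are
# hypothetical helper names, and awk replaces bc for the arithmetic.
bytes_to_gb() {
  # Format a byte count as gigabytes, e.g. 1073741824 -> 1.000GB
  awk -v b="$1" 'BEGIN { printf "%0.3fGB\n", b / 1073741824 }'
}

dd_progress() {
  # Read the "pos:" (current file offset) field of dd's fd 0 from
  # /proc. Assumes fd 0 is the side of the copy you care about.
  local pid=$1
  bytes_to_gb "$(awk '$1 == "pos:" { print $2 }' "/proc/${pid}/fdinfo/0")"
}
```

Invoked as `dd_progress "$(pidof dd)"` while a copy is running.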
This is more efficient and way faster than sending dd a signal, as it\u0026rsquo;s pulling directly from /proc instead of waiting for dd to catch the signal.\nroot@debian:~# printf '%0.3fGB\\n' $(bc -l \u0026lt;\u0026lt;\u0026lt; \u0026quot;$(awk '{if ($1 ~ \u0026quot;pos\u0026quot;) print $2}' /proc/`pidof dd`/fdinfo/0) / 1073741824\u0026quot;) 15.310GB root@debian:~#\nRun it in a loop for stats every few seconds:\nwhile :; do clear; printf '%0.3fGB\\n' $(bc -l \u0026lt;\u0026lt;\u0026lt; \u0026quot;$(awk '{if ($1 ~ \u0026quot;pos\u0026quot;) print $2}' /proc/`pidof dd`/fdinfo/0) / 1073741824\u0026quot;); sleep 2; done\n","permalink":"http://localhost:1313/2013/04/30/realtime-stats-of-dd/","summary":"\u003cp\u003eThere are at least two ways of getting the progress of the \u003ccode\u003edd\u003c/code\u003e command. One is sending the \u003ccode\u003edd\u003c/code\u003e command the -USR1 kill signal, which will cause it to print out its current progress to stderr:\u003c/p\u003e\n\u003cp\u003ekill -USR1 `pidof dd`\u003c/p\u003e\n\u003cp\u003eThe other way is to examine the fdinfo file (either 0 or 1) for the dd process under /proc to see how much data has currently been copied. 
This is more efficient and way faster than sending dd a signal, as it\u0026rsquo;s pulling directly from /proc instead of waiting for \u003ccode\u003edd\u003c/code\u003e to catch the signal.\u003c/p\u003e","title":"Realtime stats of dd"},{"content":"Every now and then, you might want to create an ext4 filesystem on a block device spanning several terabytes, and this is almost always a really long process, taking up to several hours or even days.\nThere is a little-known trick that can significantly reduce the amount of time needed to create ext4 filesystems:\nmkfs.ext4 -E lazy_itable_init=1\nThe lazy_itable_init flag is the default on newer versions of e2fsprogs, and the above snippet works on systems as old as CentOS 5.\n","permalink":"http://localhost:1313/2013/04/01/drastically-increase-mkfs-ext4-speed/","summary":"\u003cp\u003eEvery now and then, you might want to create an ext4 filesystem on a block device spanning several terabytes, and this is almost always a really long process, taking up to several hours or even days.\u003c/p\u003e\n\u003cp\u003eThere is a little-known trick that can significantly reduce the amount of time needed to create ext4 filesystems:\u003c/p\u003e\n\u003cp\u003e\u003ccode\u003emkfs.ext4 -E lazy_itable_init=1\u003c/code\u003e\u003c/p\u003e\n\u003cp\u003eThe lazy_itable_init flag is the default on newer versions of e2fsprogs, and the above snippet works on systems as old as CentOS 5.\u003c/p\u003e","title":"Drastically increase mkfs.ext4 speed"},{"content":"Quick tip this time.\nOften enough, one is logged in as root and decides to su - to an underprivileged user. Due to the tty for the root shell being owned by the user root, the su\u0026rsquo;d environment is unable to run screen:\nroot@whitegirl:~# su - joe joe@whitegirl:~$ screen Cannot open your terminal '/dev/pts/0' - please check. 
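Before getting to the fix, it's easy to confirm the cause: the pseudo-terminal device is still owned by root, so the su'd user can't open it. A small check — pty_owner is just an illustrative helper name, and GNU stat is assumed:

```shell
# Report who owns a terminal device. Under 'su - joe' from a root
# shell, pty_owner "$(tty)" prints "root", which explains the error.
pty_owner() {
  stat -c '%U' "$1"    # GNU coreutils stat: %U = owner's user name
}
```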
joe@whitegirl:~$ This is resolved by setting the owner of the terminal device to the target user before running su, so the user then has write privileges on the pseudo teletype device:\nroot@whitegirl:~# chown joe `tty` root@whitegirl:~# su - joe joe@whitegirl:~$ screen And then revert it when done\n[screen is terminating] joe@whitegirl:~$ logout root@whitegirl:~# chown root `tty` root@whitegirl:~# ","permalink":"http://localhost:1313/2013/03/08/gnu-screen-cannot-open-terminal-when-you-su-to-a-user/","summary":"\u003cp\u003eQuick tip this time.\u003c/p\u003e\n\u003cp\u003eOften enough, one is logged in as root and decides to su - to an underprivileged user. Due to the tty for the root shell being owned by the user root, the su\u0026rsquo;d environment is unable to run screen:\u003c/p\u003e\n\u003cp\u003e\u003ccode\u003eroot@whitegirl:~# su - joe joe@whitegirl:~$ screen Cannot open your terminal '/dev/pts/0' - please check. joe@whitegirl:~$ \u003c/code\u003e\u003c/p\u003e\n\u003cp\u003eThis is resolved by setting the owner of the terminal device to the target user before running su, so the user then has write privileges on the pseudo teletype device:\u003c/p\u003e","title":"su+screen: \"Cannot open your terminal '/dev/pts/0' - please check.\""},{"content":"Want to boot a (possibly minimal) installation of Debian off the network using a read-only NFS share as the root filesystem, such that each netbooted machine has / mounted read-only over NFS and all writes are done to memory? Read on!\nThis assumes you are using a Linux computer as your router, which will be running Debian and hosting the local version of Debian we will be serving to clients which are PXE booting. This could be seen as a second part of my tutorial on making a Debian box a router, as it assumes your local network is still 10.0.0.0/24 and the dhcp/nfs/tftp server\u0026rsquo;s IP is 10.0.0.1\nFirst off, we\u0026rsquo;ll need debootstrap, nfs, tftpd, and syslinux. 
Install them:\napt-get install tftpd-hpa nfs-kernel-server debootstrap syslinux We will store our initrd and boot loader under /srv/tftp and our NFS root filesystem under /srv/nfsroot\nmkdir -p /srv/tftp /srv/nfsroot Our nfsroot needs to be mountable via NFS. Export it read-only to our local network by putting the following in /etc/exports\n/srv/nfsroot 10.0.0.0/24(ro,no_root_squash,no_subtree_check) We will be booting to a custom Debian install. Install it in /srv/nfsroot using Debootstrap:\ndebootstrap stable /srv/nfsroot http://ftp.us.debian.org/debian Now we need to install some packages in the NFS installation of Debian:\nchroot /srv/nfsroot apt-get update chroot /srv/nfsroot apt-get install initramfs-tools linux-image-2.6.32-5-amd64 Configure its initramfs to generate NFS-booting initrd\u0026rsquo;s\nsed \u0026#39;s/BOOT=local/BOOT=nfs/\u0026#39; -i /srv/nfsroot/etc/initramfs-tools/initramfs.conf We\u0026rsquo;ll need the aufs module\necho aufs \u0026gt;\u0026gt; /srv/nfsroot/etc/initramfs-tools/modules Create the file /srv/nfsroot/etc/initramfs-tools/scripts/init-bottom/aufs, give it executable permissions, and fill it with the following\nmodprobe aufs mkdir /ro /rw /aufs mount -t tmpfs tmpfs /rw -o noatime,mode=0755 mount --move $rootmnt /ro mount -t aufs aufs /aufs -o noatime,dirs=/rw:/ro=ro mkdir -p /aufs/rw /aufs/ro mount --move /ro /aufs/ro mount --move /rw /aufs/rw mount --move /aufs /root exit 0 Generate the initrd inside the chroot\nchroot /srv/nfsroot update-initramfs -u -k all Copy generated initrd, kernel image, and pxe bootloader to tftp root and create folder for pxe config\ncp /srv/nfsroot/boot/initrd.img-2.6.32-5-amd64 /srv/tftp/ cp /srv/nfsroot/boot/vmlinuz-2.6.32-5-amd64 /srv/tftp/ cp /usr/lib/syslinux/pxelinux.0 /srv/tftp mkdir /srv/tftp/pxelinux.cfg Configure boot loader. 
Put the following into /srv/tftp/pxelinux.cfg/default\ndefault Debian prompt 1 timeout 10 label Debian kernel vmlinuz-2.6.32-5-amd64 append ro initrd=initrd.img-2.6.32-5-amd64 root=/dev/nfs ip=dhcp nfsroot=10.0.0.1:/srv/nfsroot Configure tftp\u0026rsquo;s /etc/default/tftpd-hpa\nTFTP_USERNAME=\u0026#34;tftp\u0026#34; TFTP_DIRECTORY=\u0026#34;/srv/tftp\u0026#34; TFTP_ADDRESS=\u0026#34;0.0.0.0:69\u0026#34; TFTP_OPTIONS=\u0026#34;--secure\u0026#34; Add these lines to your dhcp config file /etc/dhcp/dhcpd.conf\nnext-server 10.0.0.1; allow bootp; allow booting; Restart some services:\n/etc/init.d/isc-dhcp-server restart /etc/init.d/tftpd-hpa restart exportfs -ra At this point, configuration is done and you should be good to go. You might want to reset the root password on the nfs debian install:\nchroot /srv/nfsroot passwd root ","permalink":"http://localhost:1313/2012/06/19/diskless-debian-linux-booting-via-dhcppxenfstftp/","summary":"\u003cp\u003eWant to boot a (possibly minimal) installation of Debian off the network using a read-only NFS share as the root filesystem, such that each netbooted machine has / mounted read-only over NFS and all writes are done to memory? Read on!\u003c/p\u003e\n\u003cp\u003eThis assumes you are using a Linux computer as your router, which will be running Debian and hosting the local version of Debian we will be serving to clients which are PXE booting. This could be seen as a second part of my \u003ca href=\"/2012/06/19/linux-as-a-router-with-iptables-bind9-and-dhcpd/\" title=\"Linux as a router with iptables, bind9, and dhcpd\"\u003etutorial on making a Debian box a router\u003c/a\u003e , as it assumes your local network is still 10.0.0.0/24 and the dhcp/nfs/tftp server\u0026rsquo;s IP is 10.0.0.1\u003c/p\u003e","title":"Diskless Debian Linux booting via dhcp/pxe/nfs/tftp/aufs"},{"content":"There are some benefits to using a Linux box as a router. 
You get full access to the power of iptables, can host stuff directly on the box itself rather than having forwarding ports to other machines on your network, can torrent with way more peers as the box will support more connections than a usual home router, use the router itself as a fileserver/seedbox, etc.\nThe network setup this entails is as follows: [Modem] - [Linux box/router] - [switch] - [other machines on your network]\nFor the box itself you will need two network interfaces, one for your modem and one for your switch. Throughout this tutorial, we will be referring to the one connected to your modem as eth0 and the one connected to your switch as eth1.\nAdditionally, the network range I will be using for your local network will be 10.0.0.0/24\nThis tutorial is intended for Debian/Ubuntu but porting it to CentOS is trivial.\nStep 0 - Configure network interfaces Debian uses /etc/network/interfaces for assigning IP addresses and so on to its network interfaces. You can use the following and tweak it to your needs.\n# Loopback interface. Omitting this will cause weird problems auto lo iface lo inet loopback # The interface connected to the modem. This implies you do not # have a static IP address from your ISP. If you do, you can # use the same notation eth1 uses below, with the addition of a # gateway clause auto eth0 iface eth0 inet dhcp # Interface bound to local network. auto eth1 iface eth1 inet static address 10.0.0.1 netmask 255.255.255.0 Step 1 - Install packages We will need dhcpd to provide DHCP to our local network and bind9 to provide DNS lookups\napt-get install isc-dhcp-server bind9 Step 2 - Configure dhcpd As mentioned earlier, we\u0026rsquo;ll be using 10.0.0.0/24 as our IP range. Additionally, we\u0026rsquo;ll use 10.0.0.1 for the IP of our router on the local network.\nThe configuration file for dhcpd is /etc/dhcp/dhcpd.conf. 
You can configure it as follows for our purposes:\ndefault-lease-time 600; max-lease-time 7200; subnet 10.0.0.0 netmask 255.255.255.0 { range 10.0.0.100 10.0.0.200; option domain-name-servers 10.0.0.1; option routers 10.0.0.1; } This will hand out IP addresses between 10.0.0.100 and 10.0.0.200 for your local network. When/if they run out, old addresses will be reused.\nStep 3 - Configure bind9 to provide DNS for your network Debian uses /etc/bind for its bind9 named configuration files. The one we care about in this case is /etc/bind/named.conf.options\nAt some point the file will contain the directive allow-recursion, inside the options block. The act of allowing a DNS server to provide DNS for domains other than ones it hosts is referred to as recursion, as it is recursively contacting other DNS servers to carry out the client\u0026rsquo;s request. Allow recursion for your local network as follows:\nallow-recursion { 10.0.0.0/24; }; Step 4 - Allow packet forwarding in the kernel Make sure the following two lines are either present or not commented in /etc/sysctl.conf\nnet.ipv4.conf.all.forwarding=1 net.ipv4.conf.default.forwarding=1 Then reload sysctl:\n# sysctl -p Step 5 - iptables packet forwarding/masquerading We need to have iptables route packets from eth1 to eth0. For this we will use an init script. Create this file: /etc/init.d/iptables\n#!/bin/bash ### BEGIN INIT INFO # Provides: iptablesrules # Required-Start: # Required-Stop: # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Short-Description: # Description: ### END INIT INFO iptables -F iptables -t nat -F iptables -P INPUT ACCEPT iptables -P OUTPUT ACCEPT iptables -P FORWARD ACCEPT iptables -A FORWARD -i eth1 -s 10.0.0.0/255.255.255.0 -j ACCEPT iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE The most important lines are the last two. 
The first accepts all packets forwarded from eth1 (the local network), and the second masquerades them out eth0 (the internet).\nThat big comment at the top is to avoid warnings from Debian\u0026rsquo;s new dependency boot system.\nNow enable the script:\n# update-rc.d iptables enable Step 6 - Restart services (or reboot) /etc/init.d/iptables start /etc/init.d/bind9 restart /etc/init.d/isc-dhcp-server restart Conclusion At this point, you\u0026rsquo;re essentially done. Restart the services and your machines on your local network will start receiving IP addresses and be able to connect to the internet, faster than if you were using a normal consumer-grade router.\nRead on if you\u0026rsquo;d like more functionality.\nAppendix 0 - Port forwarding As tempting as hosting all the services you want on your router may be, you will invariably want to forward ports to machines behind your router. Simply add this line to the iptables init script we made:\niptables -t nat -I PREROUTING -p tcp --dport 2080 -i eth0 -j DNAT --to 10.0.0.169:80 This will forward all requests coming from eth0 (the internet) on port TCP 2080 to port 80 on machine 10.0.0.169. If you need to use UDP rather than TCP, replace tcp with udp in that command.\nAppendix 1 - Static IP\u0026rsquo;s on the local network Having all of the computers on your network get a random dhcp address can be inconvenient if you want to export NFS shares to a single machine, among other reasons. DHCP can assign IP addresses based on MAC addresses. You can add lines such as the following to the dhcpd.conf file we referred to earlier:\nhost adore { hardware ethernet f4:6d:04:44:11:fc; fixed-address 10.0.0.40; } What you provide for the hostname can be anything you feel like making up, really. Make sure that the IP you give it does not overlap the range you are having dhcpd provide.\nAppendix 2 - Well, what if I want WiFi? A nice use for your old Linksys wifi router would be to use it as a hotspot. 
Simply log in to its admin interface, disable its built-in DHCP server, configure its WiFi settings as you\u0026rsquo;d prefer, and plug one of its switch ports into your switch which is connected to your Linux router. Leave the Linksys\u0026rsquo; WAN port unplugged.\nAt this point it will essentially serve as a wireless \u0026ldquo;switch\u0026rdquo; of sorts. So you\u0026rsquo;ll have all the benefits of using a computer running Linux as a router, and still have WiFi for your place using the old Linksys as a hotspot.\nAnother way of providing WiFi connectivity is adding a wireless card to your Linux router. Unfortunately, that isn\u0026rsquo;t something I\u0026rsquo;ve felt like dealing with yet, so I\u0026rsquo;m not going to write an article on it.\n","permalink":"http://localhost:1313/2012/06/19/linux-as-a-router-with-iptables-bind9-and-dhcpd/","summary":"\u003cp\u003eThere are some benefits to using a Linux box as a router. You get full access to the power of iptables, can host stuff directly on the box itself rather than having forwarding ports to other machines on your network, can torrent with way more peers as the box will support more connections than a usual home router, use the router itself as a fileserver/seedbox, etc.\u003c/p\u003e\n\u003cp\u003eThe network setup this entails is as follows: [Modem] - [Linux box/router] - [switch] - [other machines on your network]\u003c/p\u003e","title":"Linux as a router with iptables, bind9, and dhcpd"},{"content":"So you want to install PHP\u0026rsquo;s gtk extension. Compared to GTK\u0026rsquo;s bindings for Perl and Python, PHP\u0026rsquo;s apparently is under-maintained and is a pain to install as the developers have not accommodated changes in libtool. We will need to install various development packages, temporarily tweak libtool, and then attempt compiling PHP-GTK and enabling it, provided that didn\u0026rsquo;t fail.\nThis tutorial was performed on vanilla 64-bit Debian Squeeze 6.0.2 successfully. 
Something similar will hopefully work for Ubuntu and other Debian derivatives.\nFirst off, become root and install these packages. We\u0026rsquo;ll be snagging the latest version via subversion and compiling it. We need pear to install cairo, which apparently is a dependency of compiling php-gtk\nAssuming you use sudo like myself, you\u0026rsquo;ll use sudo -i to drop yourself to a root shell. If you have the root account enabled, either log in as root directly or use su - as a normal user to become root.\napt-get install build-essential php5-cli php5-dev libgtk2.0-dev libglade2-dev subversion php-pear Now get cairo via pecl\npecl install cairo-beta Unfortunately we temporarily need to hack our libtool configuration. Don\u0026rsquo;t worry; we\u0026rsquo;ll restore the old version later. As we\u0026rsquo;re running as root, we do not need to tweak the file permissions at all here, as other tutorials seem to mention.\ncd /usr/share/aclocal cp libtool.m4 libtool.m4.bak cat lt~obsolete.m4 ltoptions.m4 ltsugar.m4 ltversion.m4 \u0026gt;\u0026gt; libtool.m4 cd Check it out, run buildconf, compile, etc. This is the most important part. If buildconf or configure do not finish successfully, undo the above change (move on to my next step) and reply demanding me to update this tutorial. Please give as much information regarding your distro\u0026rsquo;s setup and the steps you have taken as possible.\nsvn co http://svn.php.net/repository/gtk/php-gtk/trunk php-gtk cd php-gtk ./buildconf ./configure make make install cd Now undo that ugly libtool hack we used:\nmv /usr/share/aclocal/libtool.m4.bak /usr/share/aclocal/libtool.m4 Enable cairo and php-gtk extensions by adding the necessary lines to the php.ini for command-line. 
We are not creating .ini\u0026rsquo;s for each .so in the /etc/php5/conf.d/ folder (as php extensions normally do on Debian) because then php5-cgi will attempt loading them and web pages will generate error 500\u0026rsquo;s as they try to connect to X-Windows\necho 'extension=php_gtk2.so' \u0026gt;\u0026gt; /etc/php5/cli/php.ini echo 'extension=cairo.so' \u0026gt;\u0026gt; /etc/php5/cli/php.ini Ensure that gtk is properly installed. We shouldn\u0026rsquo;t need to worry about cairo being broken since pecl shouldn\u0026rsquo;t have failed.\nphp -m | grep gtk This should give something such as the following if all goes well. If it does not list php-gtk, we have failed.\nroot@adore:~# php -m | grep gtk php-gtk root@adore:~# To further verify everything is working correctly, you should probably test a sample GTK hello world as described here: http://gtk.php.net/manual/en/tutorials.helloworld.php or just run the php-gtk software you sought this tutorial for anyway. This is unnecessary if you are installing php-gtk just to get a tool such as the Phoronix Test Suite to work.\nYou should be done! Comments are much appreciated!\nUninstalling php-gtk Uninstall by killing the shared library/header files and then removing the line from the cli php.ini. (Due to your shell\u0026rsquo;s noclobber possibly being enabled, we are first outputting a gtk-less php.ini to a separate file and then moving it on top of the one including gtk)\nrm -rf /usr/lib/php5/20090626/php_gtk2.so /usr/include/php5/ext/php_gtk2/ grep -v gtk /etc/php5/cli/php.ini \u0026gt; orig_php.ini mv orig_php.ini /etc/php5/cli/php.ini ","permalink":"http://localhost:1313/2011/09/15/installing-php-gtk-on-debian/","summary":"\u003cp\u003eSo you want to install PHP\u0026rsquo;s gtk extension. Compared to GTK\u0026rsquo;s bindings for Perl and Python, PHP\u0026rsquo;s apparently is under-maintained and is a pain to install as the developers have not accommodated changes in \u003ccode\u003elibtool\u003c/code\u003e. 
We will need to install various development packages, temporarily tweak \u003ccode\u003elibtool\u003c/code\u003e, and then attempt compiling \u003ccode\u003ePHP-GTK\u003c/code\u003e and enabling it, provided that didn\u0026rsquo;t fail.\u003c/p\u003e\n\u003cp\u003eThis tutorial was performed on vanilla 64-bit Debian Squeeze 6.0.2 successfully. Something similar will hopefully work for Ubuntu and other Debian derivatives.\u003c/p\u003e","title":"Installing php-gtk on Debian"},{"content":"I encountered a dpkg related error a little while ago while upgrading packages on my Ubuntu Lucid server. I couldn\u0026rsquo;t find a fix on the internet and spent a little while investigating the cause. You can see from the command output that dpkg failed to properly install the Linux kernel package:\nroot@aeroplane:~# apt-get dist-upgrade Reading package lists... Done Building dependency tree Reading state information... Done Calculating upgrade... Done The following packages will be upgraded: linux-image-2.6.32-33-generic 1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 3 not fully installed or removed. Need to get 0B/31.6MB of archives. After this operation, 0B of additional disk space will be used. Do you want to continue [Y/n]? y (Reading database ... 178303 files and directories currently installed.) Preparing to replace linux-image-2.6.32-33-generic 2.6.32-33.70 (using .../linux-image-2.6.32-33-generic_2.6.32-33.71_i386.deb) ... Done. Unpacking replacement linux-image-2.6.32-33-generic ... dpkg-deb: subprocess paste killed by signal (Broken pipe) dpkg: error processing /var/cache/apt/archives/linux-image-2.6.32-33-generic_2.6.32-33.71_i386.deb (--unpack): short read in buffer_copy (backend dpkg-deb during `./lib/modules/2.6.32-33-generic/kernel/drivers/ata/sata_mv.ko') No apport report written because the error message indicates a dpkg I/O error Running postrm hook script /usr/sbin/update-grub. Generating grub.cfg ... 
Found initrd image: /boot/initrd.img-2.6.32-33-generic Found linux image: /boot/vmlinuz-2.6.32-32-generic Found initrd image: /boot/initrd.img-2.6.32-32-generic Found linux image: /boot/vmlinuz-2.6.32-31-generic Found initrd image: /boot/initrd.img-2.6.32-31-generic Found linux image: /boot/vmlinuz-2.6.32-30-generic Found initrd image: /boot/initrd.img-2.6.32-30-generic Found linux image: /boot/vmlinuz-2.6.32-29-generic Found initrd image: /boot/initrd.img-2.6.32-29-generic Found linux image: /boot/vmlinuz-2.6.32-28-generic Found initrd image: /boot/initrd.img-2.6.32-28-generic Found linux image: /boot/vmlinuz-2.6.32-27-generic Found initrd image: /boot/initrd.img-2.6.32-27-generic Found linux image: /boot/vmlinuz-2.6.32-26-generic Found initrd image: /boot/initrd.img-2.6.32-26-generic Found linux image: /boot/vmlinuz-2.6.32-25-generic Found initrd image: /boot/initrd.img-2.6.32-25-generic Found linux image: /boot/vmlinuz-2.6.32-24-generic Found initrd image: /boot/initrd.img-2.6.32-24-generic Found linux image: /boot/vmlinuz-2.6.32-21-generic Found initrd image: /boot/initrd.img-2.6.32-21-generic Found memtest86+ image: /boot/memtest86+.bin done Errors were encountered while processing: /var/cache/apt/archives/linux-image-2.6.32-33-generic_2.6.32-33.71_i386.deb E: Sub-process /usr/bin/dpkg returned an error code (1) root@aeroplane:~#\nSomehow the package archive file /var/cache/apt/archives/linux-image-2.6.32-33-generic_2.6.32-33.71_i386.deb became corrupt. Cleaning the local cache of deb packages and then upgrading again fixed the issue:\nroot@aeroplane:~# apt-get clean root@aeroplane:~# apt-get dist-upgrade\n","permalink":"http://localhost:1313/2011/08/18/fixing-a-dpkg-io-error/","summary":"\u003cp\u003eI encountered a \u003ccode\u003edpkg\u003c/code\u003e related error a little while ago while upgrading packages on my Ubuntu Lucid server. I couldn\u0026rsquo;t find a fix on the internet and spent a little while investigating the cause. 
You can see from the command output that \u003ccode\u003edpkg\u003c/code\u003e failed to properly install the Linux kernel package:\u003c/p\u003e\n\u003cp\u003e\u003ccode\u003eroot@aeroplane:~# apt-get dist-upgrade Reading package lists... Done Building dependency tree Reading state information... Done Calculating upgrade... Done The following packages will be upgraded: linux-image-2.6.32-33-generic 1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 3 not fully installed or removed. Need to get 0B/31.6MB of archives. After this operation, 0B of additional disk space will be used. Do you want to continue [Y/n]? y (Reading database ... 178303 files and directories currently installed.) Preparing to replace linux-image-2.6.32-33-generic 2.6.32-33.70 (using .../linux-image-2.6.32-33-generic_2.6.32-33.71_i386.deb) ... Done. Unpacking replacement linux-image-2.6.32-33-generic ... dpkg-deb: subprocess paste killed by signal (Broken pipe) dpkg: error processing /var/cache/apt/archives/linux-image-2.6.32-33-generic_2.6.32-33.71_i386.deb (--unpack): short read in buffer_copy (backend dpkg-deb during `./lib/modules/2.6.32-33-generic/kernel/drivers/ata/sata_mv.ko') No apport report written because the error message indicates a dpkg I/O error Running postrm hook script /usr/sbin/update-grub. Generating grub.cfg ... 
Found initrd image: /boot/initrd.img-2.6.32-33-generic Found linux image: /boot/vmlinuz-2.6.32-32-generic Found initrd image: /boot/initrd.img-2.6.32-32-generic Found linux image: /boot/vmlinuz-2.6.32-31-generic Found initrd image: /boot/initrd.img-2.6.32-31-generic Found linux image: /boot/vmlinuz-2.6.32-30-generic Found initrd image: /boot/initrd.img-2.6.32-30-generic Found linux image: /boot/vmlinuz-2.6.32-29-generic Found initrd image: /boot/initrd.img-2.6.32-29-generic Found linux image: /boot/vmlinuz-2.6.32-28-generic Found initrd image: /boot/initrd.img-2.6.32-28-generic Found linux image: /boot/vmlinuz-2.6.32-27-generic Found initrd image: /boot/initrd.img-2.6.32-27-generic Found linux image: /boot/vmlinuz-2.6.32-26-generic Found initrd image: /boot/initrd.img-2.6.32-26-generic Found linux image: /boot/vmlinuz-2.6.32-25-generic Found initrd image: /boot/initrd.img-2.6.32-25-generic Found linux image: /boot/vmlinuz-2.6.32-24-generic Found initrd image: /boot/initrd.img-2.6.32-24-generic Found linux image: /boot/vmlinuz-2.6.32-21-generic Found initrd image: /boot/initrd.img-2.6.32-21-generic Found memtest86+ image: /boot/memtest86+.bin done Errors were encountered while processing: /var/cache/apt/archives/linux-image-2.6.32-33-generic_2.6.32-33.71_i386.deb E: Sub-process /usr/bin/dpkg returned an error code (1) root@aeroplane:~#\u003c/code\u003e\u003c/p\u003e","title":"Fixing a dpkg io error"},{"content":"This is not a generic suPHP tutorial as there are many, many of them already; it is merely an attempt to debunk commonly preached misinformation regarding suPHP with cold, hard facts.\nsuPHP Also works with Lighttpd suPHP does not just consist of the Apache module mod_suphp; it also consists of a setuid root binary (located at /usr/local/sbin/suphp on FreeBSD; /usr/lib/suphp/suphp on recent Ubuntu releases) which does the actual work. mod_suphp is just an interface to this binary. 
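A quick way to see this for yourself is to check that the helper binary actually carries the setuid bit. This is just an illustrative check, iterating over the two paths named above; adjust for your distro:

```shell
# The suphp helper must be setuid root for the privilege switch to work.
# Path is distro-dependent; check whichever exists on your system.
for p in /usr/local/sbin/suphp /usr/lib/suphp/suphp; do
  if [ -e $p ]; then
    ls -l $p          # look for an s in the owner-execute slot: -rwsr-xr-x
    if test -u $p; then echo $p has the setuid bit; fi
  fi
done
```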
The binary also works with lighttpd provided you use a configuration file in lighttpd such as the following: server.modules += ( \u0026quot;mod_setenv\u0026quot; ) # Load Env server.modules += ( \u0026quot;mod_cgi\u0026quot; ) # Load CGI $HTTP[\u0026quot;url\u0026quot;] =~ \u0026quot;.php|\\/$\u0026quot; { # match .php files setenv.add-environment = ( \u0026quot;SUPHP_HANDLER\u0026quot; =\u0026gt; \u0026quot;application/x-httpd-php\u0026quot; ) } $HTTP[\u0026quot;url\u0026quot;] =~ \u0026quot;.pl|.py|.cgi$\u0026quot; { # Also handle normal CGI scripts setenv.add-environment = ( \u0026quot;SUPHP_HANDLER\u0026quot; =\u0026gt; \u0026quot;x-suphp-cgi\u0026quot; ) } cgi.assign = ( # Actually use suphp \u0026quot;.pl\u0026quot; =\u0026gt; \u0026quot;/usr/local/sbin/suphp\u0026quot;, \u0026quot;.py\u0026quot; =\u0026gt; \u0026quot;/usr/local/sbin/suphp\u0026quot;, \u0026quot;.cgi\u0026quot; =\u0026gt; \u0026quot;/usr/local/sbin/suphp\u0026quot;, \u0026quot;.php\u0026quot; =\u0026gt; \u0026quot;/usr/local/sbin/suphp\u0026quot; )\nMake sure the path to the suphp binary is correct and that the MIME types match those specified in the main suphp.conf file.\nWith regards to the \u0026ldquo;$HTTP[\u0026ldquo;url\u0026rdquo;] =~ \u0026ldquo;.php|\\/$\u0026rdquo; {\u0026rdquo; line, the \u0026ldquo;|\\/\u0026rdquo; part is there so it matches requests for folders with index.php inside of them, for example http://domain.com instead of http://domain.com/index.php, because lighttpd\u0026rsquo;s $HTTP[\u0026ldquo;url\u0026rdquo;] variable is exactly the URL passed to the server and not the file that the web server determines you really requested. 
From what I recall, in a future version of lighttpd the $HTTP[\u0026ldquo;file\u0026rdquo;] variable will exist to point to the requested file, so this kludge of specifically matching the directory will no longer be needed.\nsuPHP can also handle CGI scripts and binaries\nsuPHP is a third-party program that was partially intended to be a drop-in replacement for Apache\u0026rsquo;s built-in suEXEC module, as well as being better for PHP than just running it as CGI with suEXEC. It also easily lends itself to supporting Perl/Python/Ruby and other CGI scripts as well as CGI binaries themselves (such as git-http-backend). This is done with this excerpt from the suphp.conf file:\nx-suphp-cgi=\u0026quot;execute:!self\u0026quot;\nYou also need these lines in part of your Apache configuration (or the last two blocks at the end of my lighttpd excerpt above):\n\u0026lt;FilesMatch \u0026quot;\\.(pl|py|cgi)$\u0026quot;\u0026gt; SetHandler x-suphp-cgi \u0026lt;/FilesMatch\u0026gt; suPHP_AddHandler x-suphp-cgi\nI use the above FilesMatch method instead of \u0026ldquo;AddType x-suphp-cgi .pl .py .cgi\u0026rdquo; because using AddType appears to give warnings saying it can\u0026rsquo;t find the proper MIME type for \u0026ldquo;x-suphp-cgi\u0026rdquo;. 
It\u0026rsquo;d be cool if someone could find a workaround for that.\nsuPHP can automatically syntax highlight .phps files\nIt can, you just need these lines in your Apache configuration; make sure the path points to the normal CLI php binary, rather than the php-cgi binary.\nsuPHP_PHPPath /usr/local/bin/php AddType application/x-httpd-php-source .phps\nYou do not, however, need a suPHP_AddHandler line for that MIME type because suPHP apparently automatically looks for it, and adding a suPHP_AddHandler directive breaks its functionality.\nsuPHP can run scripts as root\nIt can, provided you set these values in the suphp.conf file:\nmin_uid=0 min_gid=0\nThe same file/folder ownership and permissions restrictions apply, except this time for the user `root` itself, e.g. keep files at root:root 644 and folders at root:root 755. This can be useful for things such as PHP alternatives to cPanel/WHM, but please be aware that if the script you\u0026rsquo;re designating to run as root has security vulnerabilities that can be exploited, your entire server is at the mercy of the attacker, since he/she has root access. However, the same applies to a Perl-based server manager if that ran as root as well. The main point here is to make sure you only run scripts that you 110% trust and keep updated.\nsuPHP compared, speed-wise, to mod_php\nThe difference in speed between suPHP and mod_php is the result of additional forking. With each suPHP page request, the web server forks a call to the suphp binary I mentioned earlier, which uses the setuid/setgid system calls to become the user owning the script and then forks again to run the script. mod_php has PHP loaded into Apache itself, so with each PHP page request it just runs the script on the fly without needing to fork an additional external process specifically to handle the PHP script.\nThis means that the speed difference is just due to forking. 
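That fork cost is easy to observe in isolation. A rough illustration, not a real benchmark (absolute numbers vary wildly by machine): time a loop that fork+execs an external binary against one running a shell builtin, which is roughly the per-request difference between suPHP and mod_php:

```shell
# Each /bin/true invocation costs a fork+exec, like suphp handling a request.
time sh -c 'for i in $(seq 200); do /bin/true; done'
# The builtin : does equivalent work with no fork at all,
# like mod_php running the script inside the Apache process.
time sh -c 'for i in $(seq 200); do :; done'
```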
If you have a script that runs extremely slowly under suPHP due to MySQL queries or intense number crunching/text parsing, it will likely run just as slowly under mod_php, since forking is not the bottleneck at that point.\nThis concept applies similarly to using PHP with mod_fastcgi, except that fastcgi keeps a number of worker php processes running 24/7 and assigns each incoming PHP page request to one of them to balance the load, rather than having the web server itself run the script directly.\nIt is possible to run suPHP at the same time as mod_php/mod_fastcgi\nYou can, provided you use different MIME types (the application/whatever part related to the Apache handlers), and you can set up each virtual host to use a different one within the VirtualHost settings block. While this setup is largely nonsensical, I feel it\u0026rsquo;s better to know that it is indeed doable rather than just blindly assuming it\u0026rsquo;s impossible.\n","permalink":"http://localhost:1313/2011/08/19/usually-ignored-features-of-suphp/","summary":"\u003cp\u003eThis is not a generic suPHP tutorial as there are many, many of them already; it is merely an attempt to debunk commonly preached misinformation regarding suPHP with cold, hard facts.\u003c/p\u003e\n\u003ch2 id=\"suphp-also-works-with-lighttpd\"\u003esuPHP Also works with Lighttpd\u003c/h2\u003e\n\u003cp\u003esuPHP does not just consist of the Apache module mod_suphp; it also consists of a setuid root binary (located at /usr/local/sbin/suphp on FreeBSD; /usr/lib/suphp/suphp on recent Ubuntu releases) which does the actual work. mod_suphp is just an interface to this binary. 
The binary also works with lighttpd provided you use a configuration file in lighttpd such as the following:\n\u003ccode\u003eserver.modules += ( \u0026quot;mod_setenv\u0026quot; ) # Load Env server.modules += ( \u0026quot;mod_cgi\u0026quot; ) # Load CGI $HTTP[\u0026quot;url\u0026quot;] =~ \u0026quot;.php|\\/$\u0026quot; { # match .php files setenv.add-environment = ( \u0026quot;SUPHP_HANDLER\u0026quot; =\u0026gt; \u0026quot;application/x-httpd-php\u0026quot; ) } $HTTP[\u0026quot;url\u0026quot;] =~ \u0026quot;.pl|.py|.cgi$\u0026quot; { # Also handle normal CGI scripts setenv.add-environment = ( \u0026quot;SUPHP_HANDLER\u0026quot; =\u0026gt; \u0026quot;x-suphp-cgi\u0026quot; ) } cgi.assign = ( # Actually use suphp \u0026quot;.pl\u0026quot; =\u0026gt; \u0026quot;/usr/local/sbin/suphp\u0026quot;, \u0026quot;.py\u0026quot; =\u0026gt; \u0026quot;/usr/local/sbin/suphp\u0026quot;, \u0026quot;.cgi\u0026quot; =\u0026gt; \u0026quot;/usr/local/sbin/suphp\u0026quot;, \u0026quot;.php\u0026quot; =\u0026gt; \u0026quot;/usr/local/sbin/suphp\u0026quot; )\u003c/code\u003e\u003c/p\u003e","title":"Usually ignored features of suPHP"},{"content":"Joe Gillotti\ne-mail: joe@u13.net Aim: jrgp2040 irc: jrgp in #soldat.devs on QuakeNet ","permalink":"http://localhost:1313/contact/","summary":"\u003cp\u003eJoe Gillotti\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003ee-mail: \u003ca href=\"mailto:joe@u13.net\"\u003ejoe@u13.net\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003eAim: jrgp2040\u003c/li\u003e\n\u003cli\u003eirc: jrgp in #soldat.devs on QuakeNet\u003c/li\u003e\n\u003c/ul\u003e","title":"Contact"}]