## Query eBay purchase history

eBay is great, except that they don't want to show you how much you have spent over a period of time. Maybe this is a strategic feature; I don't know and I don't care. I just need it!

Lately I've started working on some projects that require cheap parts (in order to keep the project cost low). eBay happens to be a good market for those cheap Chinese parts (let's face it: almost everything is made in China nowadays), so I began to trade on eBay quite a lot. The platform is great, but one important feature is still (intentionally) missing: the summary of your purchases.

This can be accomplished quite easily if you save the HTML page to disk and then parse its content with the help of some data extraction utility (e.g. awk).

My solution uses this approach, so I have two scripts for it.
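The scripts themselves are not reproduced here, but a minimal sketch of the extraction idea might look like the one below. It assumes every price in the saved page appears as $<amount>; the real scripts parse more fields (dates, item descriptions) and need a smarter pattern:

#!/bin/bash
# Sketch: sum the USD prices found in a saved eBay purchase history page.
# Naive by design: if a price appears more than once per item in the HTML,
# adjust the regular expression to the page layout you actually saved.
grep -oE '\$[0-9]+\.[0-9]{2}' "$1" |
    tr -d '$' |
    awk '{sum += $1; n++} END {printf "TOTAL: %d items => %.2f USD\n", n, sum}'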

Now all I have to do is save the eBay purchase history page to a file on the local disk and then call the ebay_total script, which lists all items and the total summary (expressed in the original currency and also converted to EUR/my local currency):

ebay_total -f /tmp/My_eBay_Purchase_History.htm

The output of the command above will look like:

----------------------------------------------------------------------
Nr    Date        Shipped date    Price    Item description
----------------------------------------------------------------------
1     04/09/13    04/10/13    $2.75    35g Soldering Solder Paste Flux Cream Welding Paste NT
2     04/08/13    04/08/13    $1.59    1m Power Supply Multimeter Alligator Testing Cord Lead Clip to Banana Plug cable
3     04/08/13    04/09/13    $1.56    2 Pcs Fuse Holder FH043 10A 250V for 5x20mm Fuse NEW
...
26    03/16/13    03/18/13    $1.05    2.5FT 2.0MM Desoldering Braid Solder Remover Wick Cable
----------------------------------------------------------------------
TOTAL: 26 items => 54.38 USD (~ 41.57 EUR ; 182.18 RON)

Note: one can use software that connects to your eBay account and watches your bids/purchase history. I don't like this approach, especially when we are talking about an eBay account!

## Finding which Linux module is needed for your hardware

Your Linux distro, and even the stock Linux kernel, comes with drivers for virtually all the hardware it supports. Most of them are preselected by default (based on the hardware architecture) to cover a wide palette of hardware.

However, if you are going to compile a Linux kernel for your system only, you should select/include only those modules that have something to do with your system. By cutting down the default kernel configuration you will save build time and memory, and make your system faster.

How can you find out which drivers your system needs? If you already have a functional system, or if you can boot a live Linux CD, all you have to do is look into the Linux sysfs filesystem and determine which device is using which driver.

For instance, to determine which driver my network card is using, I first need to know the network interface name (e.g. eth0, wlan0, etc.). Then I find out where the symlink /sys/class/net/<net-if>/device/driver/module points.

In my case, running ls against the path specified earlier gives me the relative path to the driver module for my network card (where <net-if>=wlan1):

user@rpi-gentoo ~ $ ls -l /sys/class/net/wlan1/device/driver/module
/sys/class/net/wlan1/device/driver/module -> ../../../../module/ath5k

As you can see, the driver needed by this device is called ath5k. To set up the kernel to include this driver, all I have to do is search for a configuration parameter named CONFIG_%ATH5K% (where % means one or more characters). You should not be surprised if you find more than one parameter matching the pattern shown above, because there can be hundreds of devices from the same vendor. For example, the following configuration parameters contain the pattern shown earlier:

CONFIG_ATH5K
CONFIG_ATH5K_AHB
CONFIG_ATH5K_DEBUG
CONFIG_ATH5K_PCI

If you want to understand what any of those drivers means, I suggest you look them up in the Linux Hardware Database (lhwdb.org). The query string for any kernel configuration parameter is: http://lhwdb.org/cfg/parameter

Note: the CONFIG_ prefix in the example above is stripped, so if the kernel parameter is named CONFIG_parameter you should use only the parameter part in the lhwdb.org query (or just go to lhwdb.org and use the damn search box).

OK, so you've found out the driver name for your network card, right? But there could be at least 10-20 different drivers in use and, just to determine the kernel configuration names, you would have to search the contents of at least 1600 different kernel Makefiles. It can be done automatically, though:

#!/bin/bash
# Run this from the root of the kernel source tree: the Makefile
# search below is relative to the current directory.
TMP=$(mktemp)
FLIST=$(mktemp)
find -type f -name Makefile|sort -u >$FLIST

# collect the module name of every device driver currently bound in sysfs
for f in $(find /sys/ -type l -name driver 2>/dev/null) ; do
    test -e $f/module && basename $(readlink $f/module)
done|sort -u >$TMP

# for each module, list the CONFIG_* symbols referencing it in the Makefiles
for m in $(cat $TMP);do
    echo -e "\e[33;1;31m[$m]\e[0m"
    for f in $(cat $FLIST);do
        grep -io "(CONFIG_.*$m.*)" $f
    done|sort -u
done|sed -e 's/^.*(//g;s/).*$//g'

rm -f $TMP $FLIST


The output of the script above, run on my system, looks like this (it could look different on your system):

[ahci]
CONFIG_EXYNOS4_DEV_AHCI
CONFIG_IMX_HAVE_PLATFORM_AHCI
CONFIG_SATA_ACARD_AHCI
CONFIG_SATA_AHCI
CONFIG_SATA_AHCI_PLATFORM
[ath5k]
CONFIG_ATH5K
CONFIG_ATH5K_AHB
CONFIG_ATH5K_DEBUG
CONFIG_ATH5K_PCI
[coretemp]
CONFIG_SENSORS_CORETEMP
[drm]
CONFIG_DRM
CONFIG_DRM_AST
CONFIG_DRM_CIRRUS_QEMU
CONFIG_DRM_EXYNOS
...
CONFIG_DRM_I915
...
[ehci_hcd]
CONFIG_USB_EHCI_HCD
CONFIG_USB_EHCI_HCD_PLATFORM
[snd_hda_intel]
CONFIG_SND_HDA_INTEL
[uhci_hcd]
CONFIG_USB_UHCI_HCD
[usbcore]
[usbhid]

I searched lhwdb.org, determined what each of these drivers represents, and decided which of them are the ones I need.

To include these and only these drivers in my Linux kernel, all I have to do is step into each class of "Device drivers" (lhwdb.org even tells you the kernel menus where these configuration parameters can be found) and select only those parameters that match the output of the script above.
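If you prefer to stay inside the kernel configuration tool itself, its symbol search does a similar job; a quick sketch (ATH5K is just the example symbol from above):

cd /usr/src/linux
make menuconfig
# press '/' and type e.g. ATH5K: the search result shows the menu
# location and the dependencies of every matching CONFIG_* symbol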

Final note: the method shown above is just the tip of the iceberg, since it only covers the process of finding the core drivers for the system. In order to determine every tiny driver used by every tiny device you might have, I recommend reading "Linux Kernel in a Nutshell" by Greg Kroah-Hartman, which presents more techniques on this topic.


## Linux commands easily overlooked

Unlike other operating systems, GNU/Linux comes with a rich application toolchain, and thus some of its tools are easily overlooked by the average Linux user.

While learning about "how to create a Linux system from scratch" I've found that, even if we limit ourselves to the base components, there are a few dozen useful tools that either we never knew existed or, worse, we forgot along the way.

I've created a list of those commands, grouped by their source package, as a reference for future use/practice (a few usage examples follow the list):

• Util-linux
  • cfdisk (like fdisk but using nice ncurses menus)
  • column – Formats a given file into multiple columns
  • findmnt – A command-line interface to the libmount library for working with mountinfo, fstab and mtab files
  • lsblk – Lists information about all or selected block devices in a tree-like format
  • lscpu – Prints CPU architecture information
  • pg – like more/less, displays a text file one screenful at a time
  • script – Makes a typescript of a terminal session
  • scriptreplay – Plays back typescripts using timing information
  • sfdisk – like parted, a disk partition table manipulator
  • whereis – Reports the location of the binary, source, and man page for the given command
• Psmisc
  • pstree – Displays running processes as a tree
• Procps
  • pmap – Reports the memory map of the given process
  • pwdx – Reports the current working directory of a process
  • w – Shows which users are currently logged on, where, and since when
• E2fsprogs
  • badblocks – Searches a device (usually a disk partition) for bad blocks
  • e2freefrag – Reports free space fragmentation information
  • e4defrag – Online defragmenter for ext4 filesystems
  • filefrag – Reports on how badly fragmented a particular file might be
• Shadow
  • lastlog – Reports the most recent login of all users or of a given user
  • pwck – Verifies the integrity of the password files /etc/passwd and /etc/shadow
  • pwunconv – Updates /etc/passwd from /etc/shadow and then deletes the latter
• Coreutils
  • nl – Numbers the lines of the given files
  • nohup – Runs a command immune to hangups, with its output redirected to a log file
  • od – Dumps files in octal and other formats
• Iana-etc
  • /etc/protocols – Describes the various DARPA Internet protocols that are available from the TCP/IP subsystem
  • /etc/services – Provides a mapping between friendly textual names for internet services and their underlying assigned port numbers and protocol types
• Inetutils
  • rcp – Performs remote file copy
  • rexec – Executes commands on a remote host
  • rsh – Runs a remote shell
  • talk – Used to chat with another user
• Gawk
  • pwcat – Dumps the password database /etc/passwd
• IPRoute
  • lnstat – Provides Linux network statistics; a generalized and more feature-complete replacement for the old rtstat program
  • nstat – Shows network statistics
  • routel – A component of ip route, for listing the routing tables
  • rtstat – Route status utility
  • ss – Similar to the netstat command; shows active connections
• Man-DB
  • apropos – Searches the whatis database and displays the short descriptions of system commands that contain a given string
  • whatis – Searches the whatis database and displays the short descriptions of system commands that contain the given keyword as a separate word
• Sysvinit
  • last – Shows which users last logged in (and out), searching back through the /var/log/wtmp file; it also shows system boots, shutdowns, and run-level changes
  • lastb – Shows the failed login attempts, as logged in /var/log/btmp
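A few of these in action, to give the flavor (plain illustrative invocations of my own, nothing system-specific):

lsblk                   # block devices as a tree: disks, partitions, mount points
findmnt -t ext4         # all mounted ext4 filesystems, with their mount options
pmap $$                 # memory map of the current shell process
script session.log      # record the whole terminal session into session.log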

## Create your own GNU/Linux system. From scratch!

Although it might sound odd, creating your own distro from scratch makes sense in certain situations:

• you want to create a small self-contained (eventually read-only) system that delivers only those components that you want
• it might be, for instance, a rescue disk, a diagnosis toolkit, whatever
• you want to create the base of a system that you are going to deploy across your entire organization and, of course, control and maintain yourself; you also want every installed component to be tuned for that particular hardware and that particular usage
• you design an embedded system and you need a basic OS to help you control the hardware
• you just want to have some fun and, being weird like me, you haven't found anything more exciting than spending a few hours/days building the entire GNU/Linux system step by step

To get a grasp of this idea, it is worth mentioning that you could, with little effort, create a self-contained system just big enough to run the Apache web server, and all this in only 5-8MB of disk space!

When I say "create your own GNU/Linux system" I mean exactly that: a GNU/Linux system, not a system developed from scratch by yourself (including the kernel, system tools, etc.). That would not be weird but a tremendous effort which, despite the fact that it would eventually pay off, would require more than a 100-line post on my blog.

The whole idea is to start with a working (HOST) system and an empty hard drive (or a raw disk image), and by downloading, compiling and installing some applications/libraries from the Internet onto that new disk, to end up with a self-contained working GNU/Linux system assembled step by step by yourself.

Unlike the other source-based GNU/Linux distributions out there (Gentoo, Sorcerer, etc.), LFS is an online book that guides you step by step through installing a Linux From Scratch (hence LFS). It's very well detailed (like an "install Linux From Scratch for dummies") and the team that maintains the project is willing to help you on their IRC channel. Of course, if you work by the book the chances of failure are minimal.

I did it, it was fun, and I ended up with a 600MB disk image (an SMP x86_64 GNU/Linux tested in the qemu emulator; user=root, pwd=lfs) that contains the base of a working Linux system (no X11, of course). In a nutshell, the steps I took are the following (a sketch of the first two steps follows the list):

1. create a raw disk image file (dd if=/dev/zero of=image.raw bs=X count=Y)
2. create the partition and filesystem for your disk (fdisk/mkfs.ext*)
3. create a filesystem hierarchy based on the standards set for UNIX-like operating systems (/bin, /boot, /dev, /etc, /home, /lib, /media, /opt, /root, /sbin, /tmp, etc.)
4. create a compiler toolchain for your new system:
   • step 1: download/unpack/configure/install Binutils, GCC, the Linux API headers and GlibC using the HOST GlibC library (depending on the HOST system)
   • step 2: configure/install Binutils and GCC using the GlibC newly created at step 1 (now it's an independent/self-contained toolchain)
5. download/unpack/configure/test/install the libraries/applications that will form the self-contained, independent base of the new system (like the Linux API headers, GlibC, Binutils, GCC, Util-linux, E2fsprogs, Shadow, Coreutils, Inetutils, Perl, Autoconf, Automake, IPRoute, SysVInit, Udev, GRUB, etc.)
6. create/configure the system boot scripts and files (like /etc/hosts, /etc/sysconfig, /etc/resolv.conf, /etc/sysconfig/network, /etc/sysconfig/clock, /etc/rc.d/*, /etc/init.d/*, /etc/inittab, /etc/fstab, etc.)
7. install the GRUB boot loader on the disk, configure the boot menu, reboot into the new system
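For the curious, a minimal sketch of steps 1-2; the sizes, device names and mount point below are illustrative assumptions, not taken from the LFS book:

dd if=/dev/zero of=image.raw bs=1M count=2048   # 2GB raw disk image
losetup -fP image.raw      # attach it as a loop device (e.g. /dev/loop0)
fdisk /dev/loop0           # create one Linux partition (re-run losetup or
                           # partprobe if /dev/loop0p1 does not show up)
mkfs.ext4 /dev/loop0p1     # build the filesystem on it
mkdir -p /mnt/lfs
mount /dev/loop0p1 /mnt/lfs   # mount the future root, then build into it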

If you stick to the English localization and are willing to discard the Linux documentation/manuals, you can easily lower the disk footprint to 300MB. If you discard the GCC compiler toolkit and its libraries you can shrink the whole thing to 100MB. If you want something more extreme, like the 5-8MB Apache web server in the example above, you will need more time to test and shrink every bit of it, but the bottom line is that it's possible!


## Distributed compilation with distcc (x86 vs R-Pi)

With today's computer performance, compilation speed has improved to the point where a distributed build can be slower than a local one (it can take much longer to throw your files and their dependencies over the network and back than to compile them locally).

However, when working with large projects or when building on old (or not so powerful) hardware, it helps if the PC brotherhood gives you a hand.

If you have 2+ similar systems (i.e. x86_64 vs. x86_64), the initial setup of distcc is a breeze, since both GCC compilers target the same architecture and should have the same CC prefix (e.g. x86_64-pc-linux-gnu-). This kind of setup is widely spread and explained all over the Internet.

The problem comes when you attempt to set up different architectures, such as an ARM (on BCM2835/BCM2708 hardware) and an x86_64, for instance. Since this is a real experiment I've done, in the following I will use these terms:

• The ARM will be the system you are compiling for (the HOST)
• The x86_64 will be the system where you distribute the building tasks (the SLAVE)

The first drawback of this setup is that, having (two) different hardware architectures between the HOST and the SLAVE(s), you are going to need a cross-compiler on the SLAVE(s) capable of producing code for the HOST architecture. Note that the cross-compiler you will install on the SLAVE(s) will have its own CC prefix (e.g. armv6-rpi-linux-gnueabi-).

Another drawback I encountered was that on the HOST the CC prefix is armv6j-hardfloat-linux-gnueabi- while on the SLAVE it is armv6-rpi-linux-gnueabi-. Every time the HOST asks the SLAVE(s) to give it a hand with some files, the HOST instructs the SLAVE(s) which program to use (e.g. armv6j-hardfloat-linux-gnueabi-gcc). As the SLAVE(s) do not have such a program, the job fails and the SLAVE returns to the HOST an error message like:

distcc ERROR: compile <file> on <SLAVE> failed with exit code 110

In the following I am going to present my setup, step by step. Note that it's valid only on Gentoo; for other Linux distros the process should be similar but not exactly the same:

1. install the distcc on the HOST
2. configure the distcc on the HOST
3. install the cross-compiler on the SLAVE
4. build & configure the cross-compiler on the SLAVE
5. install the distcc on the SLAVE
6. configure the distcc on the SLAVE
7. if your HOST is on Gentoo Linux, configure the HOST's Portage to work with distcc
8. compile the kernel on the R-Pi via distcc (not cross-compiled, but natively)
9. is it worth it? Good question!

### 1. Install the distcc on the HOST

Depending on the Linux distribution one might use, the installation of distcc can be done in several ways. The old way is via download+configure+make+install, and it works every time.
However, if you would like to take advantage of your distribution's packaging tool then you should definitely use that, whatever it's called: apt, dpkg, portage, pacman, rpm, etc.
In Gentoo I use Portage, and installing the sys-devel/distcc package comes down to running the following command:

root@rpi-gentoo ~ $ emerge distcc

### 2. Configure the distcc on the HOST

If you plan to use your HOST as a slave for other systems that want to distribute their building work over the network, you should instruct your HOST system how to run the distccd service. Edit the file /etc/conf.d/distccd and make sure it contains the following settings (note that these environment variables are well documented in your default configuration file):

DISTCC_VERBOSE="0"
DISTCCD_OPTS="--user nobody"
DISTCC_LOG="/var/log/distcc.log"
DISTCCD_EXEC="/usr/bin/distccd"
DISTCCD_PIDFILE="/var/run/distccd/distccd.pid"
DISTCCD_OPTS="${DISTCCD_OPTS} --port 3632"
DISTCCD_OPTS="${DISTCCD_OPTS} --log-level critical"
DISTCCD_OPTS="${DISTCCD_OPTS} --allow <others-ip/netmask>"
DISTCCD_OPTS="${DISTCCD_OPTS} -N 15"

We should inform the /usr/bin/distcc application which SLAVE(s) to use when it comes to distributing the compilation jobs across the network. For that we have to edit the file /etc/distcc/hosts as follows:

<SLAVE1-IP> <SLAVE2-IP> ... localhost

where SLAVE*-IP is either the IP or the host name of the SLAVE system(s) that will accept our distributed compilation jobs. We put localhost at the end of the list in order to instruct distcc to distribute over the network first and only then use the localhost CPU resources.

We want that, whenever the C compiler is called on the HOST, distcc "hijacks" that call and distributes the compilation job(s) to the configured distcc hosts. To achieve that we have to make a few more adjustments:

• remove the original symlinks /usr/lib/distcc/bin/{c++,cc,g++,gcc}
• create a wrapper script
• recreate those 4 symlinks to point to our newly created wrapper script
• adjust the global PATH variable so that the distcc install directory comes first when the system searches for a C compiler

Run the following commands as the root user:

DISTCC_BIN=/usr/lib/distcc/bin
GCC_WRAPPER=$DISTCC_BIN/$(gcc -dumpmachine)-wrapper
rm $DISTCC_BIN/{c++,cc,g++,gcc}
# ${DISTCC_BIN} and $(gcc -dumpmachine) expand while the wrapper is written;
# the escaped \${0:\$[-2]} and "\$@" expand when the wrapper runs: the last
# two characters of $0 turn the cc/c++/gcc/g++ symlink names into gcc/g++.
cat << EOT > ${GCC_WRAPPER}
#!/bin/bash
exec ${DISTCC_BIN}/$(gcc -dumpmachine)-g\${0:\$[-2]} "\$@"
EOT
chmod +x ${GCC_WRAPPER}

for prog in c++ cc g++ gcc;do
    ln -s $GCC_WRAPPER $DISTCC_BIN/$prog
done

Edit the /etc/profile on your HOST system and replace the "export PATH" line with the one below:

export PATH="/usr/lib/distcc/bin:${PATH}"

As you can see, the distcc programs will be found first in the PATH, ahead of the original gcc which comes later on the path. To activate this PATH right now, re-sync the global environment variable by running the following command:

source /etc/profile
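
### 3. Install the cross-compiler on the SLAVE

This step boils down to installing crosstool-NG (ct-ng), the tool used below to build the cross-compiler toolchain; install it on the SLAVE with your distribution's packaging tool or from the sources available on crosstool-ng.org.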

### 4. Build & configure the cross-compiler on the SLAVE

Since ct-ng is only a tool that automates the process of building a cross-compiler toolchain, it means that you don't (yet) have the cross-compiler needed to help the HOST with the (distributed) compilation tasks.
When building a compiler for a target platform (such as ARM, for instance) you should know what your target platform is (here, the HOST). One simple way of finding that out is to look up the CC prefix on your HOST:

user@rpi-gentoo ~ $ gcc -dumpmachine

Let's suppose that the command above returned armv6j-hardfloat-linux-gnueabi. It means that we should look for a ct-ng sample as close as possible to this prefix. Now that we have an idea about the target platform, we can ask ct-ng to show us its pre-configured list of samples (hopefully we'll find something similar to ours):

user@x86_64-gentoo ~ $ ct-ng list-samples

This would produce an output like:

user@x86_64-gentoo ~ $ ct-ng list-samples
Status  Sample name
[G.X]   alphaev56-unknown-linux-gnu
[G.X]   alphaev67-unknown-linux-gnu
[G.X]   arm-bare_newlib_cortex_m3_nommu-eabi
[G.X]   arm-cortex_a15-linux-gnueabi
[G..]   arm-cortex_a8-linux-gnueabi
[G..]   arm-davinci-linux-gnueabi
[G..]   arm-unknown-eabi
[G..]   arm-unknown-linux-gnueabi
[G.X]   arm-unknown-linux-uclibcgnueabi
[G..]   armeb-unknown-eabi
[G.X]   armeb-unknown-linux-gnueabi
[G.X]   armeb-unknown-linux-uclibcgnueabi
[G.X]   armv6-rpi-linux-gnueabi
[G.X]   avr32-unknown-none
[G..]   bfin-unknown-linux-uclibc
[G..]   i586-geode-linux-uclibc
[G.X]   i586-mingw32msvc,i686-none-linux-gnu
[G.X]   i686-nptl-linux-gnu
[G.X]   i686-unknown-mingw32
[G.X]   m68k-unknown-elf
[G.X]   m68k-unknown-uclinux-uclibc
[G.X]   mips-ar2315-linux-gnu
[G.X]   mips-malta-linux-gnu
[G..]   mips-unknown-elf
[G.X]   mips-unknown-linux-uclibc
[G.X]   mips64el-n32-linux-uclibc
[G.X]   mips64el-n64-linux-uclibc
[G..]   mipsel-sde-elf
[G..]   mipsel-unknown-linux-gnu
[G..]   powerpc-405-linux-gnu
[G..]   powerpc-860-linux-gnu
[G.X]   powerpc-e300c3-linux-gnu
[G.X]   powerpc-e500v2-linux-gnuspe
[G..]   powerpc-unknown-linux-gnu
[G..]   powerpc-unknown-linux-uclibc
[G..]   powerpc-unknown_nofpu-linux-gnu
[G.X]   powerpc64-unknown-linux-gnu
[G.X]   s390-ibm-linux-gnu
[G.X]   s390x-ibm-linux-gnu
[G..]   sh4-unknown-linux-gnu
[G..]   x86_64-unknown-linux-gnu
[G..]   x86_64-unknown-linux-uclibc
[G.X]   x86_64-unknown-mingw32
L (Local)       : sample was found in current directory
G (Global)      : sample was installed with crosstool-NG
X (EXPERIMENTAL): sample may use EXPERIMENTAL features
B (BROKEN)      : sample is currently broken

The armv6-rpi-linux-gnueabi sample above looks pretty good to me, since the HOST and that sample represent the same CPU architecture (armv6) and both are compatible with the GNU EABI (cool!). If you want to know which libraries it would install if you chose that profile, run the following command:

user@x86_64-gentoo ~ $ ct-ng show-armv6-rpi-linux-gnueabi
[G.X]   armv6-rpi-linux-gnueabi
OS             : linux-3.6.11
Companion libs : gmp-5.0.2 mpfr-3.1.0 ppl-0.11.2 cloog-ppl-0.15.11 mpc-0.9
binutils       : binutils-2.22
C compiler     : gcc-linaro-4.7-2013.01 (C,C++)
C library      : eglibc-2_16 (threads: nptl)
Tools

Once you've decided, configure ct-ng to use that sample/profile. The toolchain will be installed by default at ${HOME}/x-tools/armv6-rpi-linux-gnueabi. Note that you should run these commands as a regular user (not root!):

user@x86_64-gentoo ~ $ ct-ng armv6-rpi-linux-gnueabi

If you want to fine-tune this configuration via a ncurses menu (like the one you have when you are configuring the Linux kernel) then run also this command:

user@x86_64-gentoo ~ $ ct-ng menuconfig

The menu looks like the one you get when configuring the Linux kernel (that's why I prefer this tool over others, plus it works!). When ready, you can start building the cross-compiler toolchain as follows (note that it will take a while, on the order of dozens of minutes):

user@x86_64-gentoo ~ $ ct-ng build

If you want to use this compiler at the HOST level, outside of the distcc scope, then make sure that your shell environment PATH variable gets updated with the path of this new toolkit. Either edit your ~/.profile or the global /etc/profile file and add the following lines, then run either “source ~/.profile” or “source /etc/profile” to update your local/global PATH environment variable:

CROSS_ROOT="${HOME}/x-tools/armv6-rpi-linux-gnueabi" # in /etc/profile use an absolute path instead of the $HOME variable
CROSS_PATH="${CROSS_ROOT}/bin:${CROSS_ROOT}/libexec/gcc/armv6-rpi-linux-gnueabi/4.7.3:${CROSS_ROOT}/armv6-rpi-linux-gnueabi/bin"
export PATH="${CROSS_PATH}:${PATH}"

In order to allow distcc to use this toolkit, you must set the distcc environment PATH variable in the /etc/conf.d/distccd file (check my comments here). Note: replace the 4.7.3 version above with the real one that fits your build.

Because the SLAVE cross-compiler CC prefix is different from the one on our HOST (i.e. armv6-rpi-linux-gnueabi vs. armv6j-hardfloat-linux-gnueabi), when the HOST asks the SLAVE to compile a <file> with the program armv6j-hardfloat-linux-gnueabi-*, the SLAVE will fail and return the error shown at the beginning (code 110). The solution is to create the following symlinks on the SLAVE:

HOST_PREFIX=armv6j-hardfloat-linux-gnueabi
SLAVE_PREFIX=armv6-rpi-linux-gnueabi
CROSS_BIN=${HOME}/x-tools/armv6-rpi-linux-gnueabi/bin
for prog in c++ cc g++ gcc;do
    ln -s $CROSS_BIN/$SLAVE_PREFIX-$prog $CROSS_BIN/$HOST_PREFIX-$prog
done

Now, every time the HOST asks the SLAVE to compile a file using the armv6j-hardfloat-linux-gnueabi tool, the SLAVE will obey and cheat at the same time, because it will redirect that command to the locally installed cross-compiler (e.g. armv6-rpi-linux-gnueabi).
If you want to test your newly installed ARM cross-compiler, just grab a "Hello world" C program and compile it locally as below (hopefully output.o is created and everybody's happy):

armv6-rpi-linux-gnueabi-gcc -c -o output.o source.c
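A quick sanity check I'd suggest on top of that: the file utility should report an ARM object rather than an x86 one; the output looks something like:

user@x86_64-gentoo ~ $ file output.o
output.o: ELF 32-bit LSB relocatable, ARM, EABI5 version 1 (SYSV), not stripped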

You should also read the installation/usage instructions on the crosstool-ng.org website.

### 5. Install the distcc on the SLAVE

Repeat the same procedure as in step 1 above.

### 6. Configure the distcc on the SLAVE

Edit the file /etc/conf.d/distccd and make sure it contains the following settings (note that these environment variables are well documented in your default configuration file):

DISTCC_VERBOSE="0"
DISTCCD_OPTS="--user nobody"
DISTCCD_OPTS="${DISTCCD_OPTS} -j X"
DISTCC_LOG="/var/log/distcc.log"
DISTCCD_EXEC="/usr/bin/distccd"
DISTCCD_PIDFILE="/var/run/distccd/distccd.pid"
DISTCCD_OPTS="${DISTCCD_OPTS} --port 3632"
DISTCCD_OPTS="${DISTCCD_OPTS} --log-level critical"
DISTCCD_OPTS="${DISTCCD_OPTS} --allow <HOST-ip/netmask>"
DISTCCD_OPTS="${DISTCCD_OPTS} -N 15"
CROSS_ROOT="/home/<your-user>/x-tools/armv6-rpi-linux-gnueabi"
CROSS_PATH="${CROSS_ROOT}/bin:${CROSS_ROOT}/libexec/gcc/armv6-rpi-linux-gnueabi/4.7.3:${CROSS_ROOT}/armv6-rpi-linux-gnueabi/bin"
PATH=$CROSS_PATH:${PATH}

Note: adjust the 4.7.3 version to the real one that fits your build. Replace also the X with the number of jobs your SLAVE can provide. To make sure the SLAVE stays busy while the R-Pi HOST is struggling with its I/O bottleneck, I set this to at least 2-5 times the number of cores of the SLAVE system. Make sure the HOST will also attempt to use all those available slots (if you run make manually, make sure you use the -j switch, too).

Make sure that you restart the distccd service after these changes.
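On Gentoo (OpenRC), for instance, that comes down to (use your distribution's service manager if it differs):

root@x86_64-gentoo ~ $ /etc/init.d/distccd restart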

### 7. Configure the HOST Gentoo’s Portage with distcc

Portage knows how to take advantage of the distcc tool out of the box. To activate the usage of distcc we should edit /etc/make.conf (or /etc/portage/make.conf nowadays) and configure the following options:

FEATURES="${FEATURES} distcc"
MAKEOPTS="-jX"
DISTCC_HOSTS="SLAVE1 SLAVE2 ... localhost"

Note: the X above represents the total number of CPUs of your localhost + SLAVE1 + SLAVE2 + ..., and SLAVE* represents the IP or the host name of the SLAVE system(s) where the HOST will distribute the compilation jobs.

### 8. Compile the kernel while on R-Pi via distcc

Although one might (cross-)compile the kernel on a powerful system and later deploy the kernel image on the R-Pi SD card, it's also possible to compile the kernel on the running R-Pi itself. Using distcc as described above will not work out of the box, because the Linux source Makefile insists on working with gcc/g++, and therefore the local (HOST) distcc will ask the remote gcc/g++ installed on the SLAVE (instead of some armv6-* cross-compiler on the SLAVE) to compile the kernel files. Because at the SLAVE level runs your SLAVE-architecture gcc (e.g. x86_64-pc-linux-gnu-*), it will obviously compile every source received from the R-Pi's distcc using that native compiler, resulting in a native SLAVE object binary (that runs only on the SLAVE architecture, such as x86_64) instead of a binary compiled for the ARM architecture, understandable by an armv6j CPU.

Although there might be many other solutions, the one I came up with, which does not alter the previous (1-7) configuration, is to temporarily disable gcc on the SLAVE. On my SLAVE (x86_64) system I just renamed (temporarily) the /usr/x86_64-pc-linux-gnu folder (to /usr/x86_64-pc-linux-gnu.old) and, after the kernel compilation ended, I renamed it back. It's a dirty solution, I know, but until I find a better one I have to live with it.

Note: when compiling the kernel on the R-Pi don't forget to make use of the -jX switch, where X is the number of cores of your SLAVE + 1.

Is it worth all this effort?

### 9. Is it worth it?

To answer that question I've tried to compile the Linux kernel directly on the R-Pi (@700MHz, no overclocking), with and without distcc, with the help of an HP workstation with a Core 2 Duo E8400 CPU @ 3.00GHz and 4GB RAM (see the [!] note below):

• make -j10 with distcc (-j10) on the (HP) SLAVE: 4080 seconds
• make without distcc (only 1 job on the R-Pi): 10524 seconds
• the HOST /etc/distcc/hosts defines the following: SLAVE-IP/10,lzo --localslots=1

The bottom line: 4080 s / 10524 s = 0.39, i.e. with only one SLAVE the distcc compilation finished about 2.6 times faster. If you add other SLAVEs to this setup, they will definitely make the process faster. This result can be improved even further if you play a little with the distcc host specification LIMIT option (see the distcc manual).

[!] The above compilation was done with this kernel .config file. If you choose to use it then remember to remove the CONFIG_CROSS_COMPILE parameter, as it was not meant to be used in this project but rather in this one.
In another test I used a different kernel .config, even more cut down than the original one (I eliminated sound, wireless, USB support for anything other than keyboard/mouse/storage, unused cryptography algorithms, unused NLS code pages, etc.) and ran a similar test:

• make -j4 with distcc (-j10) on the (HP) SLAVE: 3028 seconds
• make without distcc (only 1 job on the R-Pi): 8035 seconds
• the HOST /etc/distcc/hosts defines the following: SLAVE-IP/4 --localslots=1

So basically, if you use your R-Pi only for some experiments (not as a gaming/media console) and you use this simplified kernel configuration, it's possible to compile the R-Pi kernel on the R-Pi itself in about 2 hours (without any extra help), or in 50 minutes or less with the help of distcc distributed compilation.

### Final note

The process described above does not apply only to the ARM vs. x86-* architectures. A setup like this should also work when you have an old/slow i386 PC and a powerful x86-* system and you want to distribute the compilation across the network so that the i386 gets a hand from its network fellows.

Working with heterogeneous environments is a science, but not an exact science. You have to grab the bull by the horns; sometimes you fix a problem after many trials and errors. It's a hard job, but somebody has to do it.

It's worth reading this blog, too: http://rostedt.homelinux.com/distcc/.

## Transparently download from Google Drive

Google Drive is a file storage and synchronization service provided by Google which enables user cloud storage, file sharing and collaborative editing. The storage limit for free accounts is, at the time of writing, about 5GB.

If you have a blog, like I have, and sometimes you write an article and want to share a file rather than inserting its contents into the article's page, you can use your free Google account and the Google Drive service to store and share that file.

## The problem

The problem I encountered regarding this matter was that, when sharing a file, you get a public share link like the one below:

https://docs.google.com/file/d/0B95k2kr1bG9feGVhd0p3QkVCSVE/view

If you provide this address as your HTML link reference then you will not be prompted with the "Save as" window; instead the Google Drive Viewer application will be shown, which will allow you, of course, to download that file:

a file shown in Google Drive Viewer

As we can see, the link provided above is structured as follows: <protocol>://<domain>/<other-dirs>/<file-id>/<action>

• the secure connection protocol: https
• the (sub)domain name: docs.google.com
• other applications/directories within the Google website: file/d/
• the file's unique identifier: 0B95k2kr1bG9feGVhd0p3QkVCSVE
• the action/command to execute on that file: view (or edit, etc.)

If I write an article and wish to insert a link to a shared file (stored on Google Drive), I would not be happy to send my readers/visitors to the Google Drive Viewer and then make them click one more time, and so on. What I would like is to get that "Save as" window directly:

the "Save As" window when clicking on a file link

Obviously there are (at least) two cases where you really need the direct download link:

1. nice to have: on a link inserted in an HTML page (such as a wordpress.com article)
2. must have: on a link from which you are supposed to download the file programmatically

In the first case you could live even without the direct link; maybe the Google Drive Viewer is not such a big deal in the end. In the second case, if you don't have a direct link, you could end up downloading content which is not your file but the Google Drive Viewer HTML content for the requested file:

<!DOCTYPE html><html><head><meta name="google" content="notranslate">
<meta http-equiv="X-UA-Compatible" content="IE=edge;">
<meta name="fragment" content="!"><title>.config.rpi
c&&0<b&&(c.tick("_tbnd",void 0,window.chrome.csi().startE),
c.tick("tbnd_","_tbnd",b))),null==a&&window.gtbExternal&&(a=window
....

## My solution(s)

Obviously, by trial and error, I found solutions for both cases.

1. For the link you would insert into your web page (such as a wordpress.com article) you can use the pattern https://docs.google.com/uc?export=download&id=<file-id>
Ex: https://docs.google.com/uc?export=download&id=0B95k2kr1bG9feGVhd0p3QkVCSVE
2. For the link from which you are supposed to download the file programmatically, the pattern above will not work; instead you will get an error message like the one below:
root@rpi-gentoo ~ $ wget https://docs.google.com/uc?export=download
--2013-02-23 10:33:32--  https://docs.google.com/uc?export=download
Resolving docs.google.com... 173.194.32.3, 173.194.32.4, 173.194.32.5, ...
Connecting to docs.google.com|173.194.32.3|:443... connected.
HTTP request sent, awaiting response... 500 Internal Server Error
2013-02-23 10:33:32 ERROR 500: Internal Server Error.

I've found that the following pattern works like a charm: https://googledrive.com/host/<file-id>
Ex: https://googledrive.com/host/0B95k2kr1bG9feGVhd0p3QkVCSVE

Note that in the second case (the one above) the file name you will be prompted to save (programmatically or not) will not be the original shared file name (like in the first example) but the hashed file identifier (such as 0B95k2kr1bG9feGVhd0p3QkVCSVE). Since this method provides a direct link (by the file-id) to download the file programmatically (e.g. wget -O <output-file> URL), it would not be a challenge for any programmer to choose whatever name he/she likes for the output file.

## Boost Linux kernel compilation for Raspberry Pi

I do a lot of experiments with my new RPi nowadays. It's like in the old days, in the early '90s, when I got my HC-91, the Romanian ZX-Spectrum clone. There is one small difference, though: the HC-91 was delivered with a case and a ROM-stored BASIC interpreter. The RPi has no case nor a pre-installed OS, so you have to take care of both.

To compile the Linux kernel for the Raspberry Pi (i.e. the ARM architecture) one should accomplish at least 8 different steps:

• fetching the (latest) Linux kernel source code (git://github.com/raspberrypi/linux.git)
• fetching the (latest) RPi firmware (git://github.com/raspberrypi/firmware.git)
• configuring the kernel according to the target architecture
• compiling the kernel and its modules
• copying the kernel image to the boot partition
• copying the RPi firmware to the boot partition
• (optional) configuring the kernel boot command line and its start-up configuration
• last but not least, copying the RPi firmware utilities to the target /opt/ directory

If you have to do this task often then it is better to automate those steps. Because I did that many times, and because I love to create my own scripts that make my life easier (in the future), I wrote an automation script that accomplishes all these steps. The script assumes that you already have a crossdev environment prepared, so you can use your cross ARM compiler right away:

Raspberry Pi kernel compilation automation script.
Usage   : /usr/sbin/rpi-build [options]
Options :
    -w : the working directory (mandatory)
    -c : CC prefix for the TARGET architecture
    -f : the path to your predefined Linux kernel .config file
    -p : the name of the boot partition of your RPi
    -t : the type of the partition specified by the -p parameter
    -h : print this help message
Report bugs to eugenmihailescux at gmail dot com

The <working directory> is the place where the Linux kernel and firmware source code will be downloaded, where the build.log will be saved and where the final output directory will be deployed.

The <CC prefix> is the full path of the crossdev toolkit that will be used to compile the kernel for the TARGET architecture. For instance, if your cross-compiler toolkit is installed at ~/x-tools/armv6-rpi-linux-gnueabi/bin/ and the compiler prefix is something like armv6-rpi-linux-gnueabi, then the CC prefix you specify should be ~/x-tools/armv6-rpi-linux-gnueabi/bin/armv6-rpi-linux-gnueabi- (note the dash at the end of the prefix!).
The <.config> is the path to a predefined Linux kernel .config file. You can use a predefined .config Linux kernel configuration file so that you don't have to configure the kernel every time you run this script. For my RPi model B I have already created a cut-down RPi configuration, which can be found here.

The <boot partition> is the name of your RPi boot partition on the SD card. If nothing is supplied then /dev/mmcblk0p2 is assumed by default.

The <partition type> is the type of the <boot partition>. If nothing is supplied then ext4 is assumed by default.

The script can be downloaded from here.

## The true price of Raspberry Pi. The cheapest setup

After getting the long-awaited Raspberry Pi board, I started to google for the equipment required to boot up the board. The bottom line of this story is that, using the equipment one might already have at home plus some extra but cheap equipment from eBay, the true price of the Raspberry Pi does not exceed $50 (or 38 € or 33 £).

The board along with a 2GB SD card:


The silicon rubber USB keyboard:


The RCA composite video cable:


The mini-USB charger adaptor converted to micro-USB:


Last but not least, my zero-dollar cardboard "Pibow replica" case: not as artistic, not as gorgeous as the original, but at least it's cheap and eco-friendly:

The case seen from the top

The case seen from the front

The case seen from one side

The case seen from the other side

Since the Raspberry Pi project aims to provide a cheap computing solution, I thought it would be nonsense to spend more on accessories than on the computing unit itself. The solution I've found sounds/looks sensible.

## Disk usage in bash

I run several headless systems and sometimes I need to know the disk usage of a particular directory. We have several choices, from the ls command to the ncurses-based ncdu utility.

For what I need, 'ls' does not provide support out of the box. ncdu looks closer to what I want, but I don't need ncurses dependencies or other tools added to my toolbox. I need a 10-line bash script, eventually written by myself (let's reinvent the wheel, shall we?).

### My solution

Create a bash script called "du1" (or whatever you like) with the following content:

#!/bin/bash
#####################################################################
# Script for displaying disk usage
#
# Syntax: du1 FOLDER <FILTER>
#	where FOLDER is the folder to show
#	and FILTER is a float {0..100} that allows you to filter
# 	only those folders which have a usage >= FILTER %
#
# Author        : Eugen Mihailescu
# Last change   : 21.Feb.2013
# E-mail        : eugenmihailescux at gmail dot com
#
# Tested on     : Linux 3.6.10-gentoo-2.1 x86_64 GenuineIntel
#####################################################################

PARENT_DIR=$1
if [ -z "$PARENT_DIR" ];then
    PARENT_DIR=~/
fi

COLS=$((24+${#PARENT_DIR}))
if [ -n "$2" ];then
    FILTER_SIZE=$2
else
    FILTER_SIZE=0
fi

# print what will be shown
s="Show usage of $PARENT_DIR"
result=$(expr $FILTER_SIZE \> 0)
if [ "$result" -eq "1" ]; then
    s="$s, for subfolders usage >= $FILTER_SIZE%"
fi
echo $s

# get the total size (in bytes) of the parent directory
TOTAL=$(du -sB1 $PARENT_DIR 2>/dev/null|cut -f1)

# print each subfolder's disk usage, sorted descending, filtered by FILTER_SIZE
find $PARENT_DIR -mindepth 1 -maxdepth 1 -type d -print0|du -sB1 --files0-from=- 2>/dev/null|sort -gr|awk -v TOTAL=$TOTAL -v fSIZE=$FILTER_SIZE '{bsize=$1;perc=100*bsize/TOTAL;if (perc>=fSIZE){size=bsize; unit="B"; if (size>=1024){size=size/1024;unit="KB";} if (size>=1024){size=size/1024; unit="MB";} if (size>=1024){size=size/1024;unit="GB";} printf("%8.2f %-2s [%-4.1f%%]\t%s\n",size,unit,perc,$2)}}'

# print a subtotal line separator
printf "%${COLS}s\n" |tr " " "-"

# print the total disk usage
echo $TOTAL $PARENT_DIR|awk -v TOTAL=$TOTAL '{bsize=$1;size=bsize; unit="B"; if (size>=1024){size=size/1024;unit="KB";} if (size>=1024){size=size/1024; unit="MB";} if (size>=1024){size=size/1024;unit="GB";} printf("%8.2f %-2s [%-4.1f%%]\t%s\n",size,unit,100*bsize/TOTAL,$2)}'


### The result

rpi-gentoo ~ # du1 / 0.1
Show usage of /, for subfolders usage >= 0.1%
1.12 GB [94.7%]    /usr
34.38 MB [2.8 %]    /opt
9.81 MB [0.8 %]    /var
9.07 MB [0.7 %]    /lib
6.34 MB [0.5 %]    /bin
2.88 MB [0.2 %]    /sbin
-------------------------
1.19 GB [100.0%]    /

That’s why Linux can be fun!

## *nix tip of the day: what ‘Linux version’ am I using?

How do you tell which Linux distro version you are using?

Since Linux is not exactly the whole ecosystem of computer programs that you are using on your desktop/server, but rather the kernel that allocates the physical hardware resources to the computer programs that you are running, it would be neither syntactically nor technically correct to name the version of your OS by the version of your OS kernel.

Who is who?

What we generically call 'Linux' is in fact:

• a kernel (a project started by Linus Torvalds back in the '90s and lately supported by a bunch of programmers) – an essential part of the OS
• the GNU OS programs that run on top of the kernel and without which the kernel is just a beautiful but useless piece of furniture

So the OS is in fact this combination of GNU with Linux added, the so-called GNU/Linux.

If I run the uname command I get the whole version info, like:

rpi-gentoo ~ # uname -a
Linux rpi-gentoo 3.6.11-cutdown+ #17 PREEMPT Mon Feb 18 14:27:02 CET 2013 armv6l ARMv6-compatible processor rev 7 (v6l) BCM2708 GNU/Linux

As the last token of the output above shows, the full and correct name is GNU/Linux.

In order to tell which 'Linux version' I am using, one should look at the version of the Linux distribution. That piece of information comes from a file located in your /etc folder, which might be called: redhat-release, SuSE-release, debian_version, arch-release, gentoo-release, slackware-version, mandriva-release, etc.

So basically, to tell the version of your Linux distribution, one should run this command:

rpi-gentoo ~ # cat /etc/*-release /etc/*version 2>/dev/null
Gentoo Base System release 2.1
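
On distributions that ship the LSB tools there is also a dedicated command for this (an alternative worth knowing; it is not installed everywhere):

rpi-gentoo ~ # lsb_release -a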