changeset 98:1d9382b0329b
Specify the syntax on markdown blocks to avoid broken output that has class=err
--- a/content/Eclipse/workspacemechanic.md	Thu Dec 19 09:31:57 2019 +0100
+++ b/content/Eclipse/workspacemechanic.md	Thu Dec 19 10:04:33 2019 +0100
@@ -8,6 +8,7 @@
 A rather undocumented feature of the plugin is how to distribute rules. Just put all rules on a HTTP server. Then, create a json document that references all these rules in the same directory as the rules files. Use this snippet as a template:
 
+    :::shell
     {
       type : 'com.google.eclipse.mechanic.UriTaskProviderModel',
       metadata : {
--- a/content/GNUstep/new-zipper-release.md	Thu Dec 19 09:31:57 2019 +0100
+++ b/content/GNUstep/new-zipper-release.md	Thu Dec 19 10:04:33 2019 +0100
@@ -4,10 +4,12 @@
 I had to debug Mule's build for the upcoming 1.4.1 release a bit, especially the packaging of the JCA distribution. I usually use Zipper to open archives, that's why I wrote it in the first place. It turns out that some file extensions are ambiguous, e.g. a file ending in .rar could either be a rar archive or a Java Resource ARchive which is effectively a zip file. Since the rar packager cannot handle zip files I picked up an idea I had in mind for a long time now: file types should not be determined by their extensions but the way the unix file command does it: by looking for certain patterns inside of a file. I implemented the most simple cases in Zipper now: Zip files begin with
 
+    :::shell
     { 'P', 'K', 0x003, 0x004 }
 
 and rar files begin with
 
+    :::shell
     { 'R', 'a', 'r', '!'}
 
 straight from the beginning of the file. When opening a file Zipper asks all registered Archive subclasses for their magic bytes and compares them to the first four bytes of the file. I think I will extend this mechanism a bit in the future so in the end all supported archives will be determined by a file's content and not by its file extension any more.
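The detection scheme this post describes — compare a file's first bytes against each archive type's magic instead of trusting the extension — can be sketched as a small shell helper. This is a hypothetical illustration, not Zipper's actual Objective-C code; the file names are made up:

```shell
# Decide an archive's type by its magic bytes instead of its extension.
# Hypothetical helper for illustration, not Zipper's actual implementation.
detect_archive() {
    case "$(head -c 4 "$1")" in
        'PK'*)  echo zip ;;      # zip magic: 'P' 'K' 0x03 0x04
        'Rar!') echo rar ;;      # rar magic: 'R' 'a' 'r' '!'
        *)      echo unknown ;;
    esac
}

# A .rar file that is really a zip (e.g. a Java Resource ARchive):
printf 'PK\003\004rest-of-archive' > /tmp/resource.rar
detect_archive /tmp/resource.rar   # reports zip, despite the extension
```

This mirrors what `file` does with its magic database, just reduced to the two signatures mentioned in the post.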
--- a/content/Java/commons-httpclient-vs-self-signed-certs.md	Thu Dec 19 09:31:57 2019 +0100
+++ b/content/Java/commons-httpclient-vs-self-signed-certs.md	Thu Dec 19 10:04:33 2019 +0100
@@ -9,16 +9,19 @@
 Let's assume you already have a HttpClient instance at hand:
 
+    :::java
     HttpClient client = new DefaultHttpClient();
 
 Now let's configure all the socket factories and stuff that's required to make HTTPS traffic with self signed certificates work:
 
+    :::java
     TrustStrategy trustStrategy = new TrustSelfSignedStrategy();
     X509HostnameVerifier hostnameVerifier = SSLSocketFactory.ALLOW_ALL_HOSTNAME_VERIFIER;
     SchemeSocketFactory socketFactory = new SSLSocketFactory(trustStrategy, hostnameVerifier);
 
 And now let's put it all together:
 
+    :::java
     Scheme https = new Scheme("https", 443, socketFactory);
     SchemeRegistry registry = client.getConnectionManager().getSchemeRegistry();
     registry.register(https);
--- a/content/Java/compiling-jdk-with-debug.md	Thu Dec 19 09:31:57 2019 +0100
+++ b/content/Java/compiling-jdk-with-debug.md	Thu Dec 19 10:04:33 2019 +0100
@@ -11,15 +11,18 @@
 Just list all the available java source files e.g. using the unix find command:
 
+    :::shell
     find . -name *.java -print > java-files.txt
 
 Now I used grep to extract only the relevant classes to recompile, e.g.
 
+    :::shell
     grep './javax/security' java-files.txt > security.txt
     grep './com/sun/security/' java-files.txt >> security.txt
 
 Now we can run the compiler using security.txt as an input file:
 
+    :::shell
     CLASSPATH=${JAVA_HOME}/jre/lib/rt.jar:${JAVA_HOME}/lib/tools.jar
     javac -g -J-Xmx512m -cp "$CLASSPATH" @security.txt
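One pitfall in the listing step of this post: the unquoted `-name *.java` is expanded by the shell before `find` sees it if the current directory happens to contain `.java` files. A self-contained sketch of the same pipeline with the pattern quoted; the directory and file names here are made up for illustration:

```shell
# Recreate the list-building steps on a throwaway tree (names are made up).
d=/tmp/jdk-src-demo
mkdir -p "$d/javax/security" "$d/com/sun/security" "$d/java/lang"
touch "$d/javax/security/Auth.java" "$d/com/sun/security/Login.java" "$d/java/lang/Object.java"

cd "$d"
# Quote the pattern so find, not the shell, expands it:
find . -name '*.java' -print > java-files.txt
# Keep only the packages we want to recompile:
grep './javax/security' java-files.txt > security.txt
grep './com/sun/security/' java-files.txt >> security.txt
cat security.txt   # the selected security sources, ready for javac @security.txt
```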
--- a/content/Java/log4j-logger-additivity.md	Thu Dec 19 09:31:57 2019 +0100
+++ b/content/Java/log4j-logger-additivity.md	Thu Dec 19 10:04:33 2019 +0100
@@ -5,6 +5,7 @@
 Sometimes you want to write more than one logfile using log4j. This is possible by defining multiple appenders and specifying an appender for a certain logger like this:
 
+    :::shell
     log4j.appender.A1=org.apache.log4j.ConsoleAppender
     ....
     log4j.appender.A2=org.apache.log4j.FileAppender
@@ -15,4 +16,5 @@
 Unfortunately, all output that goes through the logger foo comes out in both appenders, which may not be what you want. The log4j docs talk about *logger additivity* but don't show concrete examples how to configure it. The trick is to configure the additivity **on the logger** and **not on the appender**. (I always fall into that trap). Simply add the following to the example above to stop messages to logger foo from coming out on A1:
 
+    :::shell
     log4j.additivity.foo = false
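The hunk only shows fragments of the configuration. For reference, here is how the additivity flag sits in a complete minimal `log4j.properties` — a sketch in which only the names A1, A2 and foo come from the post; the layouts and the `foo.log` file name are assumptions:

```properties
# Root logger sends everything to the console appender A1
log4j.rootLogger=INFO, A1
log4j.appender.A1=org.apache.log4j.ConsoleAppender
log4j.appender.A1.layout=org.apache.log4j.PatternLayout
log4j.appender.A1.layout.ConversionPattern=%d %-5p %c - %m%n

# File appender A2, used only by the logger "foo"
log4j.appender.A2=org.apache.log4j.FileAppender
log4j.appender.A2.File=foo.log
log4j.appender.A2.layout=org.apache.log4j.PatternLayout
log4j.appender.A2.layout.ConversionPattern=%d %-5p %c - %m%n
log4j.logger.foo=INFO, A2

# Additivity is set on the logger, not the appender: without this
# line, messages to foo also bubble up to the root logger's A1.
log4j.additivity.foo=false
```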
--- a/content/Linux/compiling-shrewsoft-vpn-on-pi.md	Thu Dec 19 09:31:57 2019 +0100
+++ b/content/Linux/compiling-shrewsoft-vpn-on-pi.md	Thu Dec 19 10:04:33 2019 +0100
@@ -13,7 +13,8 @@
 Sounds like an interesting pet project so I ordered a Pi and some equipment. When it finally arrived I flashed the standard [NOOBS](http://downloads.raspberrypi.org/NOOBS_latest) starter pack. The main hurdle will be getting the Shrew Soft VPN client to run, I don't want to fiddle with the Linux distro right now. That'll be a hobby project for another day. Before attempting to compile the source all prerequisites must be installed:
- 
+
+    :::shell
     apt-get install cmake
     apt-get install flex
     apt-get install bison
@@ -24,10 +25,12 @@
 The next step is to download the sources, unpack the tarball and compile the source. This turned out to be quite smooth using
 
+    :::shell
     cmake -DCMAKE_INSTALL_PREFIX=/usr -DETCDIR=/etc -DNATT=YES
 
 followed by the typical
 
+    :::shell
     make
     make install
@@ -35,10 +38,12 @@
 Now that the VPN client is installed, I exported the VPN settings from my Linux desktop machine and tried to run the command line client
 
-    ikec -r vpn
+    :::shell
+    ikec -r vpn
 
 I should have been warned by the smooth compile. Of course the VPN client does not work out of the box, it crashes with
 
+    :::shell
     *** glibc detected *** ikec: double free or corruption (out): 0x0191fa70 ***
     Aborted
--- a/content/Linux/debian-fixing-problem-with-defaults-entries.md	Thu Dec 19 09:31:57 2019 +0100
+++ b/content/Linux/debian-fixing-problem-with-defaults-entries.md	Thu Dec 19 10:04:33 2019 +0100
@@ -7,6 +7,7 @@
 Quite unrelatedly my boss kept nagging me about incoming emails to root that looked like this
 
+    :::shell
     Subject: *** SECURITY information for <host> ***
     <host> : Sep 29 05:45:42 : user : problem with defaults entries ; TTY=pts/0 ; PWD=/home/user ;
--- a/content/Linux/enigmail-vs-pinentry.md	Thu Dec 19 09:31:57 2019 +0100
+++ b/content/Linux/enigmail-vs-pinentry.md	Thu Dec 19 10:04:33 2019 +0100
@@ -10,10 +10,12 @@
 So as a hint for anyone who may stumble over the same problem as I did:
 
+    :::shell
     emerge app-crypt/pinentry
 
 and don't forget to enable one of the GUI keywords e.g. `gtk` or `qt`. To make sure that the pinentry link points to the correct binary run
 
+    :::shell
     eselect pinentry list
 
 and select the correct variant.
--- a/content/Linux/file-manager.md	Thu Dec 19 09:31:57 2019 +0100
+++ b/content/Linux/file-manager.md	Thu Dec 19 10:04:33 2019 +0100
@@ -6,6 +6,7 @@
 On my [Gentoo](http://www.gentoo.org/) machine the "Show in System Explorer" menu item did not work in Eclipse. I kept getting this error message:
 
+    :::shell
     Execution of 'dbus-send --print-reply --dest=org.freedesktop.FileManager1 /org/freedesktop/FileManager1 org.freedesktop.FileManager1.ShowItems array:string:"file:/tmp/HelloWorld.java" string:""' failed with return code: 1
 
 A quick search on the net found the [freedesktop File Manager DBus specification](http://www.freedesktop.org/wiki/Specifications/file-manager-interface/). It mentions only Gnome's Nautilus implementing the dbus interface - but I do not use Gnome, I use [XFCE](http://www.xfce.org/). Some more searching finds a [ticket on the XFCE bugtracker](https://bugzilla.xfce.org/show_bug.cgi?id=12414) which confirms that Thunar, the XFCE file manager, does not support the file manager DBus interface yet.
--- a/content/Linux/fq_codel.md	Thu Dec 19 09:31:57 2019 +0100
+++ b/content/Linux/fq_codel.md	Thu Dec 19 10:04:33 2019 +0100
@@ -2,8 +2,6 @@
 Date: 2013-08-08
 Lang: en
 
-![Down with latency!](http://www.imagetxt.com/userpics/funnyimage-72529829.png)
-
 I've been planning on playing around with Linux' traffic shaping for quite some time now. My linux router machine is directly connected to the DSL modem - no consumer grade router box in the mix (and hence none of the vulnerabilities and security nightmares that have been discovered in those kind of boxen in the recent past). My requirements for traffic shaping are quite simple: I share some open source downloads via bittorrent but I don't want the torrent traffic to block regular surfing, Skype voice calls etc.
@@ -12,6 +10,7 @@
 Since Codel is "no knobs", "just works" all you have to do is to enable CONFIG_NET_SCH_FQ_CODEL in the kernel config and enable codel using
 
+    :::shell
     tc qdisc add dev ppp0 root fq_codel
 
 What can I say? It just works. I have turned off throttling on the torrent uploads and did not notice any lag in daily surfing. Skype calls sound like they did before with torrents turned off.
--- a/content/Linux/nfs.md	Thu Dec 19 09:31:57 2019 +0100
+++ b/content/Linux/nfs.md	Thu Dec 19 10:04:33 2019 +0100
@@ -8,16 +8,19 @@
 In `/etc/sysctl.conf` these settings:
 
+    :::shell
     fs.nfs.nlm_tcpport = 4001
     fs.nfs.nlm_udpport = 4001
 
 In `/etc/conf.d/nfs` enable these settings:
 
+    :::shell
     OPTS_RPC_MOUNTD="-p 32767"
     OPTS_RPC_STATD="-p 32765 -o 32766"
 
 Now all NFS daemons should be locked down to specific ports so you can add appropriate shorewall rules:
 
+    :::shell
     ACCEPT loc fw tcp 111 # portmapper
     ACCEPT loc fw udp 111
     ACCEPT loc fw tcp 2049 # rpc.nfsd
--- a/content/Linux/portage-metadata-cache.md	Thu Dec 19 09:31:57 2019 +0100
+++ b/content/Linux/portage-metadata-cache.md	Thu Dec 19 10:04:33 2019 +0100
@@ -5,6 +5,7 @@
 [Gentoo's](http://www.gentoo.org) portage keeps metadata about installed ebuilds in `/var/cache/edb`. Dependency info for all installed ebuilds is in a `dep` subdirectory which typically looks something like this:
 
+    :::shell
     .
     ├── usr
     │   └── portage
@@ -25,6 +26,7 @@
 So I gave the sqlite metadata cache a try to measure if it really speeds up portage. After configuring the database and rebuilding the metadata cache, `/var/cache/edb/dep` looks a bit different now:
 
+    :::shell
     .
     └── usr
         ├── portage
--- a/content/Linux/software-raid-setup.md	Thu Dec 19 09:31:57 2019 +0100
+++ b/content/Linux/software-raid-setup.md	Thu Dec 19 10:04:33 2019 +0100
@@ -9,6 +9,7 @@
 The first step of the setup is partitioning the drives. The [handbook](https://wiki.gentoo.org/wiki/Handbook:AMD64/Installation/Disks) suggests adding a small partition at the beginning of the drive to enable booting from a gpt partitioned drive. The `/boot` partition will be formatted using [ext4](https://en.wikipedia.org/wiki/Ext4) because the filesystem will remain bootable even if one of the drives is missing. The rest of the disk will be formatted using [xfs](https://en.wikipedia.org/wiki/XFS). To recap the layout:
 
+    :::shell
     Number  Start    End      Size     File system  Name  Flags
     1       1.00MiB  3.00MiB  2.00MiB               grub  bios_grub
     2       3.00MiB  95.0MiB  92.0MiB               boot
@@ -18,14 +19,17 @@
 Now let's create a RAID 1 for the boot partition:
 
+    :::shell
     mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb2 /dev/sdc2
 
 and for the rootfs:
 
+    :::shell
     mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb3 /dev/sdc3
 
 To maintain the RAID device numbering even after reboot, the RAID config has to be saved. This will be done by
- 
+
+    :::shell
     mdadm --detail --scan >> /etc/mdadm.conf
 
 Then create an ext4 filesystem on `/dev/md0` and an xfs filesystem on `/dev/md1`. Nothing noteworthy here.
@@ -34,6 +38,7 @@
 After chrooting into the new system some changes have to be made to the [genkernel](https://wiki.gentoo.org/wiki/Genkernel) config in order to produce a RAID enabled initramfs. In `/etc/genkernel.conf` set
 
+    :::shell
     MDADM="yes"
     MDADM_CONFIG="/etc/mdadm.conf"
@@ -41,6 +46,7 @@
 While it's compiling, edit `/etc/default/grub` (I'm of course using [grub2](https://www.gnu.org/software/grub/manual/grub.html) for booting) and add
 
+    :::shell
     GRUB_CMDLINE_LINUX="domdadm"
 
 Setup grub on both devices individually using `grub-install /dev/sdb` and `grub-install /dev/sdc`.
--- a/content/Linux/sshfs_with_key.md	Thu Dec 19 09:31:57 2019 +0100
+++ b/content/Linux/sshfs_with_key.md	Thu Dec 19 10:04:33 2019 +0100
@@ -10,6 +10,7 @@
 Long story short: To mount via sshfs using an existing ssh key, use
 
+    :::shell
     sshfs -o IdentityFile=/path/to/the/ssh/private/key host:/dir /mountpoint
 
 This approach even works with an ssh agent. Make sure that you get all prompts out of the way (i.e. asking for the key password etc) before mounting.
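The same mount can also be made permanent via `/etc/fstab` using the `fuse.sshfs` filesystem type — a sketch, assuming the key has no passphrase or an agent is available at mount time (paths reused from the post; the `noauto,user` options are one possible choice, not from the post):

```
host:/dir  /mountpoint  fuse.sshfs  IdentityFile=/path/to/the/ssh/private/key,_netdev,noauto,user  0  0
```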
--- a/content/Maven/cross-jdk-project-files-continued.md	Thu Dec 19 09:31:57 2019 +0100
+++ b/content/Maven/cross-jdk-project-files-continued.md	Thu Dec 19 10:04:33 2019 +0100
@@ -6,10 +6,12 @@
 Unfortunately there's more Eclipse internals involved when dealing with cross platform issues. It turns out that the correct JRE_CONTAINER for Linux and Windows is
 
+    :::shell
     org.eclipse.jdt.launching.JRE_CONTAINER/org.eclipse.jdt.internal.debug.ui.launcher.StandardVMType/jdk
 
 but that doesn't work for Mac where it needs to be like this:
 
+    :::shell
     org.eclipse.jdt.launching.JRE_CONTAINER/org.eclipse.jdt.internal.launching.macosx.MacOSXType/jdk
 
 So for real cross platform project files you need to put the launcher type into a property and override that in a mac specific profile. The final pom will look similar to this
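The hunk cuts off before the pom snippet the post refers to. The property-plus-profile approach it describes could look roughly like this — a sketch in which the profile id, the property name `jre.container.type` and the OS activation are assumptions; only the two launcher type strings come from the post:

```xml
<properties>
  <!-- default: the Linux/Windows launcher type -->
  <jre.container.type>org.eclipse.jdt.internal.debug.ui.launcher.StandardVMType</jre.container.type>
</properties>

<profiles>
  <profile>
    <id>mac</id>
    <activation>
      <os><family>mac</family></os>
    </activation>
    <properties>
      <jre.container.type>org.eclipse.jdt.internal.launching.macosx.MacOSXType</jre.container.type>
    </properties>
  </profile>
</profiles>
```

The classpath container entry would then reference `org.eclipse.jdt.launching.JRE_CONTAINER/${jre.container.type}/jdk`.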
--- a/content/Maven/cross-jdk-project-files.md	Thu Dec 19 09:31:57 2019 +0100
+++ b/content/Maven/cross-jdk-project-files.md	Thu Dec 19 10:04:33 2019 +0100
@@ -36,4 +36,5 @@
 Then go and define a property in the pom (for the default value). Anyone who uses a different JDK can specify the name to use on the commandline now using
 
+    :::shell
     mvn -Djdk=sun-jdk-1.4.2.14 eclipse:eclipse
--- a/content/Maven/deploying-files.md	Thu Dec 19 09:31:57 2019 +0100
+++ b/content/Maven/deploying-files.md	Thu Dec 19 10:04:33 2019 +0100
@@ -18,6 +18,7 @@
 The `file` parameter takes the main jar. The `pomFile` parameter takes the pom file. That's easy. The other files have to be specified using a more convoluted format. Each file name has to specify its classifier and its type, appended to the separate `classifiers` and `types` lists. Finally the `files` list must specify the full file names. A look at the example will make more sense:
 
+    :::shell
     mvn deploy:deploy-file \
         -Dmaven.repo.local=/tmp/maven-repo \
         -Durl=http://nexus.local/repository/releases \
--- a/content/Maven/skipping-tests.md	Thu Dec 19 09:31:57 2019 +0100
+++ b/content/Maven/skipping-tests.md	Thu Dec 19 10:04:33 2019 +0100
@@ -4,18 +4,22 @@
 In more complicated Maven builds you might package your tests along with your normal code to use in other modules (see the [maven-jar-plugin](http://maven.apache.org/plugins/maven-jar-plugin/index.html) docs for how to do it). For normal development you might not want to execute the unit tests every time you build. Compiling the source with
 
+    :::shell
     mvn compile
 
 won't build the tests which might fail the whole build as other modules depend on the compiled test classes. So you might be tempted to use
 
+    :::shell
     mvn compile test-compile
 
 which won't do the job either because the tests won't be compiled. At first glance using
 
+    :::shell
     mvn -Dmaven.test.skip=true test
 
 seems to be what you want to do but alas, the tests won't be compiled in this case either. The solution to this problem is to use
 
+    :::shell
     mvn -Dmaven.test.skip.exec=true test
 
 which actually enters the test phase (which in turn compiles the tests) but skips only the execution of the unit tests.
--- a/content/Python/installing-packages-in-userspace-on-osx.md	Thu Dec 19 09:31:57 2019 +0100
+++ b/content/Python/installing-packages-in-userspace-on-osx.md	Thu Dec 19 10:04:33 2019 +0100
@@ -8,16 +8,19 @@
 Before I can use `pip` it has to be installed. I want to install everything in userspace so first a proper PYTHONPATH has to be set up:
 
+    :::shell
     export PYTHONPATH=/Users/dirk/Python/site-packages
 
 Then I can go ahead and use `easy_install` to install `pip`:
 
+    :::shell
     easy_install --install-dir /Users/dirk/Python/site-packages pip
 
 Make sure to add `/Users/dirk/Python` to your PATH so that you can call pip without specifying the full path. Now I can start installing packages:
 
+    :::shell
     pip install --target /Users/dirk/Python/site-packages <package>
 
 This approach has its limitations, though. Since the developer tools aren't installed, packages that require compilation of C code won't work.
\ No newline at end of file
--- a/content/SCM/sventon-changeset-links.md	Thu Dec 19 09:31:57 2019 +0100
+++ b/content/SCM/sventon-changeset-links.md	Thu Dec 19 10:04:33 2019 +0100
@@ -17,5 +17,6 @@
 The following rewrite config rewrites the old format to the new one and forwards it to sventon:
 
+    :::shell
     RewriteCond %{QUERY_STRING} name=(.*)&revision=([0-9]+)
     RewriteRule /sventon/revinfo.svn http://sventon.local/sventon/repos/%1/info?revision=%2 [L]
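For context: a `RewriteCond` only guards the `RewriteRule` that immediately follows it, and both need `RewriteEngine on` somewhere in the enclosing scope. A sketch of the surrounding Apache config, with the host names and directives reused from the post:

```apache
RewriteEngine on
# Old style: /sventon/revinfo.svn?name=<repo>&revision=<rev>
RewriteCond %{QUERY_STRING} name=(.*)&revision=([0-9]+)
# New style: /sventon/repos/<repo>/info?revision=<rev>
# (%1 and %2 are backreferences to the RewriteCond groups)
RewriteRule /sventon/revinfo.svn http://sventon.local/sventon/repos/%1/info?revision=%2 [L]
```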