Create a tunnel to a remote host via a gateway
ssh -L <local-port-to-listen>:<destination-host>:<destination-port> <gateway_user>@<gateway>
This opens a local port <local-port-to-listen> and forwards any traffic sent to it via the gateway (<gateway_user>@<gateway>) to the remote destination <destination-host>:<destination-port>
-f runs ssh in the background
-N means no remote command (i.e. just create the tunnel)
ssh -N -f -L 8080:destination:8080 user@gateway
Pointing your browser to localhost:8080 will connect to the ssh tunnel which forwards the data to destination:8080, going via the gateway
This blog serves as a dumping ground for my own interests. On it you will find anything I want to keep track of: links, articles, tips and tricks. Mostly it focuses on C++, JavaScript and HTML, Linux and performance.
Saturday, 22 November 2014
Tuesday, 18 November 2014
Multicast troubleshooting
Troubleshooting multicast:
Check that the interface is configured with multicast:
$ ifconfig eth9.240
eth9.240 Link encap:Ethernet HWaddr 00:60:DD:44:67:9E
inet addr:10.185.131.41 Bcast:10.185.131.63 Mask:255.255.255.224
inet6 addr: fe80::260:ddff:fe44:679e/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
Check that the multicast addresses you are subscribing to have a route to that particular interface:
$ ip route
224.0.0.0/4 dev eth9.240 scope link
Run your application and check if the subscriptions are going to the correct interface:
$ netstat -g
IPv6/IPv4 Group Memberships
Interface RefCnt Group
[...]
eth9.240 1 239.1.127.215
eth9.240 1 239.1.1.215
Run tcpdump and check that you are indeed receiving traffic. Do this while your application is running; otherwise the IGMP subscription will not be active.
$ tcpdump -i eth9.240
10:15:13.385228 IP 10.0.8.121.45666 > 239.1.1.1.51001: UDP, length 16
If you got to the tcpdump part, the networking should be OK.
If your application is still not receiving packets, it is probably because of the rp_filter in Linux.
The rp_filter (reverse-path filter) drops any packets whose source address is not routable via the interface they arrived on. In the example above, if 10.0.8.121 is not routable via eth9.240, its packets will be dropped. The solution is to:
Check the filter:
$ cat /proc/sys/net/ipv4/conf/ethX/rp_filter
Add this line to /etc/sysctl.conf (dots in an interface name are written as slashes in sysctl keys):
net.ipv4.conf.eth9/240.rp_filter = 0
$ sudo sysctl -p
Check that it took effect:
$ sysctl -a | grep "eth9/240.rp_filter"
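The IGMP subscription that netstat -g lists above is created by the receiving application when it joins the group. A minimal sketch of that join in Python (the group, port and interface below are illustrative placeholders, not a real feed):

```python
# Sketch: join a multicast group so the kernel emits an IGMP membership
# report (the subscription that `netstat -g` shows).
import socket
import struct

def join_multicast(group, port, iface_ip="0.0.0.0"):
    """Return a UDP socket subscribed to `group` on `port`."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))  # receive datagrams addressed to the group's port
    # struct ip_mreq: 4-byte group address followed by 4-byte interface address
    mreq = struct.pack("4s4s", socket.inet_aton(group), socket.inet_aton(iface_ip))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock

# sock = join_multicast("239.1.1.215", 51001)  # then sock.recv(...) in a loop
```

Run it while capturing with tcpdump and you should see the membership report go out on the chosen interface.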
Thursday, 23 October 2014
C++ Correct multi threaded singleton initialisation
Correct double-checked locking pattern
1. Pointer must be atomic
2. Check, lock, check, construct
std::atomic<Foo*> foo { nullptr };
std::mutex foo_lock;

Foo* instance()
{
    Foo* f = foo; // single load of foo
    if (!f)
    {
        std::lock_guard<std::mutex> l(foo_lock);
        f = foo; // re-check under the lock, and keep f in sync
        if (!f)
        {
            foo = f = new Foo(); // assign both foo and f
        }
    }
    return f;
}
Even better, use std::unique_ptr and std::call_once to get automatic cleanup and less scaffolding
class Foo
{
public:
    static Foo& instance()
    {
        std::call_once(_create, []{
            _instance = std::make_unique<Foo>();
        });
        return *_instance;
    }
private:
    static std::unique_ptr<Foo> _instance;
    static std::once_flag _create;
};
Or just use a function-local static (thread-safe initialisation is guaranteed since C++11)
Foo& Foo::instance()
{
    static Foo foo;
    return foo;
}
C++ decltype and auto type deduction
auto type deduction strips const, volatile and ref
const int& bar = foo;
auto baz = bar; // strips const and ref - therefore type of baz is int
decltype type deduction doesn't strip const, volatile and ref
// decltype of a name
const int& bar = foo;
decltype(bar) // does not strip const and ref - therefore type is const int&
// decltype of an expression
decltype(lvalue expression) always returns an lvalue reference
int arr[5];
arr[0] = 5;
decltype(arr[0]) // lvalue reference, therefore type is int&
C++ type information at run time
std::type_info::name and typeid(T).name() report the type as if it had been passed by value to a function template, so reference and const/volatile qualifiers are stripped — as required by the standard — and the name may be mangled
use Boost.TypeIndex
#include <boost/type_index.hpp>
boost::type_index::type_id_with_cvr<T>().pretty_name();
boost::type_index::type_id_with_cvr<decltype(t)>().pretty_name();
C++14 mutable lambda and by-value and by-value init capture
by-value capture vs by-value init capture
by-value capture: type of `i` is `const int`
{
const int i = 0;
auto lambda = [i]() { };
}
by-value init capture: type of `i` is `int`
{
const int i = 0;
auto lambda = [i=i]() { };
}
lambda function call operator is const
error: by-value capture: type of `i` is `int`, but default lambda operator() is const member function
{
int i = 0;
auto lambda = [i]() { i = 1; };
}
error: by-value init capture: type of `i` is `int`, but default lambda operator() is const member function
{
const int i = 0;
auto lambda = [i=i]() { i = 1; };
}
making lambda function call operator mutable
error: by-value capture: type of `i` is `const int`, can't assign, even though lambda operator() is mutable member function
{
const int i = 0;
auto lambda = [i]() mutable { i = 1; };
}
by-value capture: type of `i` is `int`, and lambda operator() is mutable member function
{
int i = 0;
auto lambda = [i]() mutable { i = 1; };
}
by-value init capture: type of `i` is `int`, and lambda operator() is mutable member function
{
const int i = 0;
auto lambda = [i=i]() mutable { i = 1; };
}
Monday, 18 August 2014
Python for data analysis
numpy
Array creation functions
array : Convert input data (list, tuple, array, or other sequence type) to an ndarray either by inferring a dtype or explicitly specifying a dtype. Copies the input data by default.
asarray : Convert input to ndarray, but do not copy if the input is already an ndarray
arange : Like the built-in range but returns an ndarray instead of a list
ones, ones_like : Produce an array of all 1's with the given shape and dtype. ones_like takes another array and produces a ones array of the same shape and dtype.
zeros, zeros_like : Like ones and ones_like but producing arrays of 0's instead
empty, empty_like : Create new arrays by allocating new memory, but do not populate with any values like ones and zeros
eye, identity : Create a square N x N identity matrix (1's on the diagonal and 0's elsewhere)
Unary ufuncs
abs, fabs : Compute the absolute value element-wise for integer, floating point, or complex values. Use fabs as a faster alternative for non-complex-valued data
sqrt : Compute the square root of each element. Equivalent to arr ** 0.5
square : Compute the square of each element. Equivalent to arr ** 2
exp : Compute the exponent e^x of each element
log, log10, log2, log1p : Natural logarithm (base e), log base 10, log base 2, and log(1 + x), respectively
sign : Compute the sign of each element: 1 (positive), 0 (zero), or -1 (negative)
ceil : Compute the ceiling of each element, i.e. the smallest integer greater than or equal to each element
floor : Compute the floor of each element, i.e. the largest integer less than or equal to each element
rint : Round elements to the nearest integer, preserving the dtype
modf : Return fractional and integral parts of array as separate arrays
isnan : Return boolean array indicating whether each value is NaN (Not a Number)
isfinite, isinf : Return boolean array indicating whether each element is finite (non-inf, non-NaN) or infinite, respectively
cos, cosh, sin, sinh, tan, tanh : Regular and hyperbolic trigonometric functions
arccos, arccosh, arcsin, arcsinh, arctan, arctanh : Inverse trigonometric functions
logical_not : Compute truth value of not x element-wise. Equivalent to -arr
Binary universal functions
add : Add corresponding elements in arrays
subtract : Subtract elements in second array from first array
multiply : Multiply array elements
divide, floor_divide : Divide or floor divide (truncating the remainder)
power : Raise elements in first array to powers indicated in second array
maximum, fmax : Element-wise maximum. fmax ignores NaN
minimum, fmin : Element-wise minimum. fmin ignores NaN
mod : Element-wise modulus (remainder of division)
copysign : Copy sign of values in second argument to values in first argument
greater, greater_equal, less, less_equal, equal, not_equal : Perform element-wise comparison, yielding boolean array. Equivalent to infix operators >, >=, <, <=, ==, !=
logical_and, logical_or, logical_xor : Compute element-wise truth value of logical operation. Equivalent to infix operators &, |, ^
Basic array statistical methods
sum : Sum of all the elements in the array or along an axis. Zero-length arrays have sum 0.
mean : Arithmetic mean. Zero-length arrays have NaN mean.
std, var : Standard deviation and variance, respectively, with optional degrees of freedom adjustment (default denominator n)
min, max : Minimum and maximum
argmin, argmax : Indices of minimum and maximum elements, respectively
cumsum : Cumulative sum of elements starting from 0
cumprod : Cumulative product of elements starting from 1
Array set operations
unique(x) : Compute the sorted, unique elements in x
intersect1d(x, y) : Compute the sorted, common elements in x and y
union1d(x, y) : Compute the sorted union of elements
in1d(x, y) : Compute a boolean array indicating whether each element of x is contained in y
setdiff1d(x, y) : Set difference, elements in x that are not in y
setxor1d(x, y) : Set symmetric difference; elements that are in either of the arrays, but not both
Linear Algebra
diag : Return the diagonal (or off-diagonal) elements of a square matrix as a 1D array, or convert a 1D array into a square matrix with zeros on the off-diagonal
dot : Matrix multiplication
trace : Compute the sum of the diagonal elements
det : Compute the matrix determinant
eig : Compute the eigenvalues and eigenvectors of a square matrix
inv : Compute the inverse of a square matrix
pinv : Compute the Moore-Penrose pseudo-inverse of a square matrix
qr : Compute the QR decomposition
svd : Compute the singular value decomposition (SVD)
solve : Solve the linear system Ax = b for x, where A is a square matrix
lstsq : Compute the least-squares solution to y = Xb
Random Number Generation
seed : Seed the random number generator
permutation : Return a random permutation of a sequence, or return a permuted range
shuffle : Randomly permute a sequence in place
rand : Draw samples from a uniform distribution
randint : Draw random integers from a given low-to-high range
randn : Draw samples from a normal distribution with mean 0 and standard deviation 1 (MATLAB-like interface)
binomial : Draw samples from a binomial distribution
normal : Draw samples from a normal (Gaussian) distribution
beta : Draw samples from a beta distribution
chisquare : Draw samples from a chi-square distribution
gamma : Draw samples from a gamma distribution
uniform : Draw samples from a uniform [0, 1) distribution
Taken from Python for Data Analysis by Wes McKinney
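A minimal sketch exercising a few of the functions listed above (requires numpy; the values are chosen purely for illustration):

```python
import numpy as np

arr = np.arange(6)                    # array creation: [0, 1, 2, 3, 4, 5]
sq = np.square(arr)                   # unary ufunc: [0, 1, 4, 9, 16, 25]
total = arr.sum()                     # statistical method: 15
m = np.maximum(arr, sq - 10)          # binary ufunc, element-wise maximum
common = np.intersect1d(arr, [3, 4, 99])   # set operation: [3, 4]

A = np.eye(2) * 2.0                   # 2x2 identity, scaled
b = np.array([4.0, 6.0])
x = np.linalg.solve(A, b)             # solve Ax = b -> [2., 3.]
```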
Sunday, 17 August 2014
IPython debugging
Debugger commands
h : (help) Display command list
help command : Show documentation for command
c : (continue) Resume program execution
q : (quit) Exit debugger without executing any more code
b number : (break) Set breakpoint at number in current file
b path/to/file.py:number : Set breakpoint at line number in specified file
s : (step) Step into function call
n : (next) Execute current line and advance to next line at current level
u / d : (up) / (down) Move up/down in function call stack
a : (args) Show arguments for current function
debug statement : Invoke statement in new (recursive) debugger
l : (list) Show current position and context at current level of stack
w : (where) Print full stack trace with context at current position
Post-mortem debugging
%debug
Entering %debug immediately after an exception has occurred drops you into the stack frame where the exception was raised
Utility functions
Poor man's breakpoint
def set_trace():
    import sys
    from IPython.core.debugger import Pdb
    Pdb(color_scheme='Linux').set_trace(sys._getframe().f_back)
Putting set_trace() in your code will automatically drop into the debugger when the line is executed.
Interactive function debugging
def debug(f, *args, **kwargs):
    from IPython.core.debugger import Pdb
    pdb = Pdb(color_scheme='Linux')
    return pdb.runcall(f, *args, **kwargs)
Passing a function to debug will drop you into the debugger for an arbitrary function call:
debug(fn, arg1, arg2, arg3, kwarg1=foo, kwarg2=bar)
Interactive script debugging
Executing a script via %run with -d will start the script in the debugger:
%run -d ./my_script.py
Specifying a line number with -b starts the script with a breakpoint already set:
%run -d -b20 ./my_script.py # sets a breakpoint on line 20
Taken from Python for Data Analysis by Wes McKinney
Sunday, 29 June 2014
bjam / boost.build
Boost.Build
Common signature:
rule rule-name
(
target-name :
sources + :
requirements * :
default-build * :
usage-requirements *
)
target-name is the name used to request the target
sources is the list of source files or other targets
requirements is the list of properties that must always be present when building this target
default-build is the list of properties that will be used unless some other value is already specified (eg: on cmd line or propagation from a dependent target)
usage-requirements is the properties that will be propagated to all targets that use this one
Helper commands:
glob - takes a list of shell patterns and returns the list of files in the project's source directory that match. An optional second argument is a list of exclude patterns
lib tools : [ glob *.cpp : exclude.cpp ] ;
glob-tree - recursive glob
lib tools : [ glob-tree *.cpp : .svn ] ;
constant - project wide constant
constant VERSION : 1.34.0 ;
Project:
project project-name
: requirements <feature>value <feature>value
;
Programs:
exe app-name
: app.cpp some_library.lib ../project//library
: <threading>multi
;
sources is one cpp file (app.cpp), a library in the same directory (some_library.lib) and a Jamfile target (library) specified in the Jamfile found in the path ../project
requirements is that threading is multi
Libraries:
Library targets can represent:
Libraries that should be built from source
lib lib-name
: lib.cpp
;
sources is one cpp file (lib.cpp)
Prebuilt libraries which already exist on the system
Such libraries can be searched for by the tools using them (typically with the linker's -l option), or their paths can be known in advance by the build system.
lib z
:
: <name>z <search>../3rd/libz
;
lib compress
:
: <file>/opt/libs/libcompress.a
;
<name> specifies the name of the library without the standard prefixes and suffixes.
In the above example, z could refer to z.so, libz.a, z.lib etc
<search> specifies paths in which to search for the library (in addition to the default compiler paths)
<search> can be specified multiple times, or omitted (meaning only the default compiler paths will be searched)
Note that <search> paths are added to the linker search path (-L) for all libraries being linked in the target, which can potentially lead to libraries from another path being picked up first
Convenience helper syntax for prebuilt libraries
lib z ;
lib gui db aux ;
is the same as
lib z : : <name>z ;
lib gui : : <name>gui ;
lib db : : <name>db ;
lib aux : : <name>aux ;
Prebuilt libraries for different build variants
lib foo
:
: <file>libfoo_release.a <variant>release
;
lib foo
:
: <file>libfoo_debug.a <variant>debug
;
Referencing other libraries
When a library references another library, that library should be listed in its list of sources.
Specify library dependencies even for searched and prebuilt libraries
lib z ;
lib png : z : <name>png ;
How Boost.Build includes library dependencies
When a library has a shared library as a source, or a static library has another static library as a source, then a target linking to the first library will also automatically link to the source library
However, when a shared library has a static library as a source, then the shared library will be built such that it completely includes the static library (--whole-archive)
If you don't want this behaviour, you need to use the following:
lib a : a.cpp : <use>b : : <library>b ;
This says that library a uses library b, and causes executables that link to a to also link to b, instead of a itself referring to b
Automatically add a library's header location to any upstream target's include path
When a library's interface is in a header file, you can set usage-requirements for the library to include the path where the header file is, so that any target using the library target will automatically get the path to its header added to its include search path
lib foo : foo.cpp : : : <include>. ;
Control library linking order
If library a "uses" library b, then library a will appear before library b.
Library a is considered to use library b if b is present either in library a's sources or its usage is listed in a's requirements
The <use> feature can also be used to explicitly express a relationship.
lib z ;
lib png : : <use>z ;
exe viewer : viewer png z ;
z will be linked before png
Special helper for zlib.
zlib can be configured either to use precompiled binaries or to build the library from source.
Find zlib in the default system location
using zlib ;
Build zlib from source
using zlib : 1.2.7 : <source>/home/steven/zlib-1.2.7 ;
Find zlib in /usr/local
using zlib : 1.2.7 : <include>/usr/local/include <search>/usr/local/lib ;
Build zlib from source for msvc and find prebuilt binaries for gcc.
using zlib : 1.2.7 : <source>C:/Devel/src/zlib-1.2.7 : <toolset>msvc ;
using zlib : 1.2.7 : : <toolset>gcc ;
Builtin features:
variant - build variant. Default configuration values: debug, release, profile.
link - library linking. values: shared, static
runtime-link - binary linking. values: shared, static
threading - link additional threading libraries. values: single, multi
source - useful for adding the same source to all targets in the project (put <source> in requirements), or to conditionally include a source
library - useful for linking to the same libraries for all targets in the project
dependency - introduces a dependency on the target named by the value. If the declared target is built, the dependent target will be too
implicit-dependency - indicates the target named by the value may produce files which the declared target uses.
use - introduces a dependency on the target named by the value, and adds its usage requirements to the build properties of the target being declared. The dependency is not used in any other way.
dll-path - add a shared library search path.
hardcode-dll-path - hardcode the dll-path entries. Values: true, false.
cflags, cxxflags, linkflags - passed on to the corresponding tools.
include - add an include search path.
define - define a preprocessor symbol. A value can be specified: <define>symbol=value
warnings - control the warning level of the compiler. Values: off, on, all.
warnings-as-errors - turn on to have builds fail when a warning is emitted.
build - skips building the target. Useful to conditionally set the value. Values: no.
tag - customize the name of generated files. Value: @rulename, where rulename is the name of a rule with the signature: rule tag ( name : type ? : property-set ). The rule will be called for each target with the default name of the target, the type of the target, and property set. Return an empty string to use the default target name, or a non empty string to be used for the name of the target. Useful for encoding library version nos etc.
debug-symbols - Include debug symbols in the object files etc. Values: on, off.
Objects:
Change behaviour for only a single object file
obj foo : foo.cpp : <optimization>off ;
exe bar : bar.cpp foo ;
foo will be built with the special flags, and can then be pulled into other targets
Alias:
Alternative name for a group of targets
alias core : foo bar baz ;
Using core in the source list of any other target or on the command line will translate to the aliased group of targets
Change build properties
alias my_bar : ../foo//bar : <link>static ;
my_bar now refers to the bar target in the foo Jamfile, but has the requirement that it be linked statically
Specify a header only library
alias hdr_only_lib : : : : <include>/path/to/headers ;
Using hdr_only_lib will just add an include path to any targets
Propagation of usage-requirements
When an alias has sources, the usage-requirements of those sources are propagated as well.
lib lib1 : lib1src.cpp : : : <include>/path/to/lib1.hpp ;
lib lib2 : lib2src.cpp : : : <include>/path/to/lib2.hpp ;
alias static_libs : lib1 lib2 : <link>static ;
exe main : main.cpp static_libs ;
Compiles main with lib1 and lib2 as static libraries, and their paths are added to the include search path
Installing:
Installing a built target to a relative path
install dist : foo bar ;
foo and bar will be moved to the dist folder, relative to the Jamfile's directory
Installing a built target to specific location
install dist : foo bar : <location>/install/path/location ;
foo and bar will be moved to /install/path/location
Installing a built target to a path based on a conditional expression
install dist
: foo bar
: <variant>release:<location>dist/release
<variant>debug:<location>dist/debug ;
foo and bar will be installed to relative path dist/<build-variant>
Installing a built target to a path based on an environment variable
(see accessing environment variables below)
install dist : foo bar : <location>$(DIST) ;
Automatically install all dependencies
install dist
: foo
: <install-dependencies>on
<install-type>EXE
<install-type>LIB
;
will find all targets foo depends on, and install those which are either executables or libraries
Preserve directory hierarchy
install headers
: a/b/c.h
: <location>/tmp
<install-source-root>a
;
Install into several directories
use an alias rule to install to several directories
alias install : install-bin install-lib ;
install install-bin : apps : <location>/usr/bin ;
install install-lib : libs : <location>/usr/lib ;
set the RPATH
install installed : application : <dll-path>/usr/lib/snake
<location>/usr/bin ;
will allow the application to find libraries placed in the /usr/lib/snake directory.
Testing:
unit-test
behaves just like the exe rule, but the test is automatically run after building
a launcher can be set to run the test through another tool, eg: valgrind build/path/foo_test
Environment variables:
import os ;
local SOME_PATH = [ os.environ SOME_PATH ] ;
exe foo : foo.cpp : <include>$(SOME_PATH) ;
Executing external programs:
local foo = [ SHELL "bar" ] ;
Conditional expressions:
syntax
property ( "," property ) * ":" property
multiple properties can be combined, eg: <toolset>gcc,<os>NT:<link>static
will link hello statically only when compiling with gcc on NT
Command reference:
http://www.boost.org/doc/libs/1_55_0/doc/html/bbv2/reference.html
In the above example, z could refer to z.so, libz.a, z.lib etc
<search> specifies paths in which to search for the library (in addition to the default compiler paths)
<search> can be specified multiple times, or omitted (meaning only the default compiler paths will be searched)
Note that <search> paths are added to the linker search path (-L) for all libraries being linked in the target, which can potentially lead to libraries from another path being picked up first
Convenience helper syntax for prebuilt libraries
lib z ;
lib gui db aux ;
is the same as
lib z : : <name>z ;
lib gui : : <name>gui ;
lib db : : <name>db ;
lib aux : : <name>aux ;
Prebuilt libraries for different build variants
lib foo
:
: <file>libfoo_release.a <variant>release
;
lib foo
:
: <file>libfoo_debug.a <variant>debug
;
Referencing other libraries
When a library references another library, that library should be listed in its list of sources.
Specify library dependencies even for searched and prebuilt libraries
lib z ;
lib png : z : <name>png ;
How Boost.Build includes library dependencies
When a library has a shared library as a source, or a static library has another static library as a source, then an target linking to the first library will also automatically link to the source library
However, when a shared library has a static library as a source, then the shared library will be built such that it completely includes the static library (--whole-archive)
If you don't want this behaviour, you need to use the following:
lib a : a.cpp : <use>b : : <library>b ;
This says that library uses library b, and causes executables that link to a to also link to b, instead of a referring to b
Automatically add a library's header location to any upstream target's include path
When a library's interface is in a header file, you can set usage-requirements for the library to include the path where the header file is, so that any target using the library target will automatically get the path to its header added to its include search path
lib foo : foo.cpp : : : <include>. ;
Control library linking order
If library a "uses" library b, then library a will appear before library b.
Library a is considered to use library b is b is present either in library a's sources or its usage is listed in its requirements
The <use> feature can also be used to explicitly express a relationship.
lib z ;
lib png : : <use>z ;
exe viewer : viewer png z ;
z will be linked before png
Special helper for zlib.
zlib can be configured either to use precompiled binaries or to build the library from source.
Find zlib in the default system location
using zlib ;
Build zlib from source
using zlib : 1.2.7 : <source>/home/steven/zlib-1.2.7 ;
Find zlib in /usr/local
using zlib : 1.2.7 : <include>/usr/local/include <search>/usr/local/lib ;
Build zlib from source for msvc and find prebuilt binaries for gcc.
using zlib : 1.2.7 : <source>C:/Devel/src/zlib-1.2.7 : <toolset>msvc ;
using zlib : 1.2.7 : : <toolset>gcc ;
Builtin features:
variant - build variant. Default configuration values: debug, release, profile.
link - library linking. values: shared, static
runtime-link - binary linking. values: shared, static
threading - link additional threading libraries. values: single, multi
source - useful for adding the same source to all targets in the project (put <source> in requirements), or to conditionally include a source
library - useful for linking to the same libraries for all targets in the project
dependency - introduces a dependency on the target named by the value. If the declared target is built, the dependent target will be too
implicit-dependency - indicates the target named by the value may produce files which the declared target uses.
use - introduces a dependency on the target named by the value, and adds its usage requirements to the build properties of the target being declared. The dependency is not used in any other way.
dll-path - add a shared library search path.
hardcode-dll-path - hardcode the dll-path entries. Values: true, false.
cflags, cxxflags, linkflags - passed on to the corresponding tools.
include - add an include search path.
define - define a preprocessor symbol. A value can be specified: <define>symbol=value
warnings - control the warning level of the compiler. Values: off, on, all.
warnings-as-errors - turn on to have builds fail when a warning is emitted.
build - skips building the target. Useful to conditionally set the value. Values: no.
tag - customize the name of generated files. Value: @rulename, where rulename is the name of a rule with the signature: rule tag ( name : type ? : property-set ). The rule will be called for each target with the default name of the target, the type of the target, and property set. Return an empty string to use the default target name, or a non empty string to be used for the name of the target. Useful for encoding library version nos etc.
debug-symbols - Include debug symbols in the object files etc. Values: on, off.
Change behaviour for only a single object file
obj foo : foo.cpp : <optimization>off ;
exe bar : bar.cpp foo ;
foo will be built with the special flags, and can then be pulled into other targets
Alias:
Alternative name for a group of targets
alias core : foo bar baz ;
Using core in the source list of any other target or on the command line will translate to the aliased group of targets
Change build properties
alias my_bar : ../foo//bar : <link>static ;
my_bar now refers to the bar target in the foo Jamfile, but has the requirement that it be linked statically
Specify a header only library
alias hdr_only_lib : : : : <include>/path/to/headers ;
Using hdr_only_lib will just add an include path to any targets
Propagation of usage-requirements
When an alias has sources, the usage-requirements of those sources are propagated as well.
lib lib1 : lib1src.cpp : : : <include>/path/to/lib1.hpp ;
lib lib2 : lib2src.cpp : : : <include>/path/to/lib2.hpp ;
alias static_libs : lib1 lib2 : <link>static ;
exe main : main.cpp static_libs ;
main is compiled with lib1 and lib2 linked as static libraries, and their include paths are added to the include search path
Installing:
Installing a built target to a relative path
install dist : foo bar ;
foo and bar will be moved to the dist folder, relative to the Jamfile's directory
Installing a built target to a specific location
install dist : foo bar : <location>/install/path/location ;
foo and bar will be moved to /install/path/location
Installing a built target to a path based on a conditional expression
(see conditional expressions below)
install dist
: foo bar
: <variant>release:<location>dist/release
<variant>debug:<location>dist/debug ;
foo and bar will be installed to relative path dist/<build-variant>
Installing a built target to a path based on an environment variable
(see accessing environment variables below)
install dist : foo : <location>$(SOME_PATH) ;
Installing a target together with its dependencies
install dist
: foo
: <install-dependencies>on
<install-type>EXE
<install-type>LIB
;
will find all targets foo depends on, and install those which are either executables or libraries.
install headers
: a/b/c.h
: <location>/tmp
<install-source-root>a
;
/tmp/b/c.h will be installed
use an alias rule to install to several directories
install install-bin : apps : <location>/usr/bin ;
install install-lib : libs : <location>/usr/lib ;
set the RPATH
install installed : application : <dll-path>/usr/lib/snake
<location>/usr/bin ;
will allow the application to find libraries placed in the /usr/lib/snake directory.
Testing:
unit-testing
unit-test foo_test : test.cpp foo ;
behaves just like exe rule, but the test is automatically run after building
testing through another application
unit-test foo_test : test.cpp foo : <testing.launcher>valgrind ;
runs the test through the launcher, eg: valgrind build/path/foo_test
Executing external programs:
local foo = [ SHELL "bar" ] ;
Accessing environment variables:
import os ;
local SOME_PATH = [ os.environ SOME_PATH ] ;
exe foo : foo.cpp : <include>$(SOME_PATH) ;
Conditional expressions:
syntax
property ( "," property ) * ":" property
multiple properties can be combined
exe hello : hello.cpp : <os>NT,<toolset>gcc:<link>static ;
will link hello statically only when compiling with gcc on NT
Command reference:
http://www.boost.org/doc/libs/1_55_0/doc/html/bbv2/reference.html
Tuesday, 24 June 2014
Sublime Text for C++ development
Best of Sublime Text
http://scotch.io/bar-talk/best-of-sublime-text-3-features-plugins-and-settings
Project -> Save Project As...
bjam build system
{
"shell_cmd": "bjam",
"file_regex": "^(..[^:]*):([0-9]+):?([0-9]+)?:? (.*)$",
"selector": "source.cpp"
}
Packages:
Sublime GDB
CTags
Sofa theme
Sidebar Enhancements
Bracket highlighter
SFTP
Git
Git Gutter
Advanced New File
Terminal
Markdown Editing
Sublime REPL
Thursday, 19 June 2014
Eclipse configuration
Install from eclipse site, not apt-get:
http://www.eclipse.org/downloads/
I decompressed it into /opt/eclipse, and installed a symlink in /usr/bin
$ sudo ln -s /opt/eclipse/eclipse /usr/bin
Increase heap memory available to eclipse (prevents crashing):
$ vim /opt/eclipse/eclipse.ini
-vmargs
-Dosgi.requiredJavaVersion=1.6
-XX:MaxPermSize=1G
-Xms1G
-Xmx2G
Add support for C++11 features for the code inspection
Window -> Preferences -> C/C++ -> Build -> Settings -> Discovery (tab) -> CDT GCC Built-in Compiler Settings. There is "Command to get compiler specs", add "-std=c++11" in there.
Syntax Highlighting theme:
Add the eclipse-color-theme repo to Eclipse marketplace
Help -> Install New Software -> Add -> Location: http://eclipse-color-theme.github.com/update
Select color theme:
Window -> Preferences -> General -> Appearance -> Color Theme : select Monokai or Obsidian or RecognEyes
Editor line highlight colors, etc:
Window -> Preferences -> General -> Editors -> Text Editors
Annotations:
Window -> Preferences -> General -> Editors -> Text Editors -> Annotations
C/C++ Indexer Markers -> Uncheck all
C/C++ Occurrences -> Uncheck Text as Squiggly Line
Codan Errors -> Uncheck all
Codan Warnings -> Uncheck all
Change default Build Action:
Window -> Preferences -> General -> Keys
Filter on "Build"
Remove Ctrl-B from Build All, and add it to Build Project
C++ build console:
Window -> Preferences -> C++ -> Build -> Console
Increase the number of lines
Set colors
Source hover popup:
Window -> Preferences -> C++ -> Editor
Source Hover Background
Automatically close:
Window -> Preferences -> C++ -> Editor -> Typing
Uncheck all auto-close
Editor mark occurrences:
Window -> Preferences -> C++ -> Editor -> Mark Occurrences
Uncheck "Keep marks when the selection changes"
Now restart eclipse to make sure your settings are saved.
Change scalability settings
Window -> Preferences -> C++ -> Editor -> Scalability
Increase number of lines to something larger
Unused:
Color theme:
http://marketplace.eclipse.org/content/eclipse-moonrise-ui-theme
Window -> Preferences -> General -> Appearance : select Dark or MoonRise
Remote System Explorer:
Help -> Install New Software
Search for Remote, and install it
New connection -> SSH Only
Connect
Sftp files -> navigate to src directory -> Rt click -> Create Remote Project
Project indexer search paths
Project -> Properties -> C++ General -> Paths & Symbols
Includes
Library Paths
eg:
Includes:
${QTDIR}/include
${QTDIR}/include/QtCore
${QTDIR}/include/QtWidgets
${QTDIR}/include/QtGui
Library paths
${QTDIR}/include
${QTDIR}/include/QtCore
${QTDIR}/include/QtWidgets
${QTDIR}/include/QtGui
Wednesday, 4 June 2014
Setting up an ubuntu vagrant instance
Install vagrant
Download the latest deb from http://www.vagrantup.com/downloads.html
Install vagrant from downloaded deb
$ sudo dpkg -i ./vagrant.deb
Install virtualbox
Add the appropriate deb source to your apt sources
$ echo "deb http://download.virtualbox.org/virtualbox/debian trusty contrib" | sudo tee -a /etc/apt/sources.list
Add the oracle public key
$ wget -q http://download.virtualbox.org/virtualbox/debian/oracle_vbox.asc -O- | sudo apt-key add -
Update the apt cache
$ sudo apt-get update
Install virtualbox
$ sudo apt-get install virtualbox-4.3
Note that if you're loading a 64 bit vm you need to have hardware virtualisation enabled in your bios (and your processor needs to support it!)
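To check for hardware virtualisation support before trying to boot a 64 bit vm, count the relevant CPU flags (a quick sketch; vmx is Intel VT-x, svm is AMD-V):

```shell
# Count CPU threads advertising hardware virtualisation flags.
# vmx = Intel VT-x, svm = AMD-V; 0 means no support (or it is disabled in the BIOS).
grep -Ec '(vmx|svm)' /proc/cpuinfo
```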
Initialise a vagrant instance
$ vagrant init ubuntu/trusty64
$ vagrant up
ssh into vm
$ vagrant ssh
halt/suspend/destroy vm
$ vagrant suspend saves current state of vm and stops it - fast to resume, uses more space
$ vagrant halt saves current state of vm and shuts down - slower to resume, less space
$ vagrant destroy destroys all traces of the vm - no space used
Enable remote ssh access
By default vagrant will only create a private network between the host and vm. By changing to a public network, the vm will be allocated an ip address from your LAN and you will be able to ssh in from a remote machine
In the Vagrantfile:
config.vm.network "public_network", bridge: 'eth0'
Reload Vagrantfile:
$ vagrant reload
You can ssh into the vm (vagrant ssh) and find out the ip address (ifconfig), allowing you to now ssh directly into the machine
$ ssh vagrant@192.168.1.xxx
Enable provisioning of the vm with ansible
Requires ansible to be installed
In the Vagrantfile
config.vm.provision :ansible do |ansible|
ansible.playbook = "ansible/provision.yml"
ansible.inventory_path = "ansible/hosts"
ansible.limit = "all"
end
Create a file called ansible/hosts which has the vagrant vm listed in it
[vagrant]
192.168.1.xxx
Create a file called ansible/provision.yml which will be our playbook
---
- hosts: vagrant
tasks:
- name: test vm is up
ping:
Provision the vm
$ vagrant provision
Thursday, 29 May 2014
Debugging mocha tests
launch node-inspector in the background
$ node-inspector &
run your mocha tests in debug mode
$ mocha --debug-brk /path/to/test.js
point Chrome to node-inspector
http://localhost:8080/debug?port=5858
Wednesday, 28 May 2014
MongoDb quick reference
In the mongo shell
# show all databases
show dbs
# switch to a database
use <database>
# drop database
db.dropDatabase();
RAID: creation and monitoring hard drive and raid health
Create raid 6 array with 4 disks
$ mdadm --create --verbose /dev/md0 --level=6 --raid-devices=4 /dev/sd[b-e]
Save your raid configuration
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
in /etc/mdadm/mdadm.conf rename
ARRAY /dev/md0 metadata=1.2 name=ion:1 UUID=aa1f85b0:a2391657:cfd38029:772c560e
to:
ARRAY /dev/md0 UUID=aa1f85b0:a2391657:cfd38029:772c560e
and recreate the initrd for the kernel and include the configuration files relevant for the MD-RAID configuration
$ sudo update-initramfs -u
Create filesystem
To create our file system for best performance we need to calculate our stride and stripe sizes.
Stride size: divide the array chunk size by the file system block size.
We find the Array chunk size by looking at /proc/mdstat or using mdadm
$ cat /proc/mdstat
... 512k chunk ...
$ mdadm --detail /dev/md{...}
...
Chunk Size : 512K
...
A block size of 4k offers best performance for ext4.
Therefore, in the above example, stride size is 512 / 4 = 128
Stripe size: multiply the stride by the number of data disks.
In raid-6, 2 disks are used for parity, and in raid-5, 1 disk is used for parity.
In my example I have 4 disks and am using raid-6, therefore I have 2 data disks.
Therefore, I have a stripe size of 128 * 2 = 256.
Create the file system:
$ mkfs.ext4 -b 4096 -E stride=128,stripe-width=256 /dev/md0
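The arithmetic above is easy to get wrong by hand, so here is a small sketch that derives the mkfs.ext4 parameters from the example's geometry (512K chunk, 4K blocks, raid-6 on 4 disks):

```shell
# Derive ext4 stride/stripe-width from RAID geometry (values from the example above).
chunk_kb=512   # chunk size in KiB, from /proc/mdstat or mdadm --detail
block_kb=4     # ext4 block size in KiB (4096-byte blocks)
disks=4        # total disks in the array
parity=2       # raid-6 has 2 parity disks, raid-5 has 1

stride=$((chunk_kb / block_kb))
stripe_width=$((stride * (disks - parity)))
echo "mkfs.ext4 -b $((block_kb * 1024)) -E stride=$stride,stripe-width=$stripe_width /dev/md0"
# prints: mkfs.ext4 -b 4096 -E stride=128,stripe-width=256 /dev/md0
```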
Mount filesystem
$ sudo mkdir /mnt/raid
$ sudo chmod 1777 /mnt/raid
$ sudo mount -o noauto,rw,async -t ext4 /dev/md0 /mnt/raid
Make it permanent - add to /etc/fstab
/dev/md0 /mnt/raid ext4 defaults 1 2
Export via NFS
$ sudo apt-get install nfs-kernel-server
Mount the raid drive in the exported tree
$ sudo mkdir /export
$ sudo mkdir /export/raid
$ sudo mount --bind /mnt/raid /export/raid
Make it permanent - add to /etc/fstab
/mnt/raid /export/raid none bind 0 0
Configure /etc/exports
/export 192.168.1.0/24(rw,fsid=0,insecure,no_subtree_check,async)
/export/raid 192.168.1.0/24(rw,nohide,insecure,no_subtree_check,async)
Start the service
$ sudo service nfs-kernel-server restart
Monitoring drive health
smartmontools: monitor S.M.A.R.T. (Self-Monitoring, Analysis and Reporting Technology) attributes and run hard drive self-tests.
enable SMART support if it's not already on
$ for i in `ls -1 /dev/sd[a-z]`; do smartctl -s on $i; done
turn on offline data collection
$ for i in `ls -1 /dev/sd[a-z]`; do smartctl -o on $i; done
enable autosave of device vendor-specific attributes
$ for i in `ls -1 /dev/sd[a-z]`; do smartctl -S on $i; done
check the overall health for each drive
$ for i in `ls -1 /dev/sd[a-z]`; do RESULTS=`smartctl -H $i | grep result | cut -f6 -d' '`; echo $i: $RESULTS; done
If any drive doesn't show PASSED, immediately back up all your data as that drive is probably about to fail.
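The PASSED check can be pulled out into a tiny helper so it's testable against canned smartctl output (smart_status is just a name I made up, not part of smartmontools):

```shell
# Classify the "result" line of `smartctl -H`: prints OK for PASSED, WARN otherwise.
smart_status() {
  case "$1" in
    *PASSED*) echo OK ;;
    *)        echo WARN ;;
  esac
}

# Against real drives (needs root and smartmontools):
#   for i in /dev/sd[a-z]; do
#     echo "$i: $(smart_status "$(smartctl -H $i | grep result)")"
#   done

smart_status "SMART overall-health self-assessment test result: PASSED"   # prints OK
smart_status "SMART overall-health self-assessment test result: FAILED!"  # prints WARN
```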
Configure smartd to automatically check drives
$ vim /etc/smartd.conf
DEVICESCAN -H -m root -M exec /usr/libexec/smartmontools/smartdnotify -n standby,10,q
DEVICESCAN means scan for all devices
-H means monitor SMART health status
-m root means mail to root
-M exec /usr/libexec/smartmontools/smartdnotify means run the smartdnotify script to email warnings
-n standby,10,q means don't take the disk out of standby except if it's been in standby 10 times in a row, and don't report the unsuccessful attempts.
pick up changes
service smartd restart
Other options for smartd.conf:
-a \ # Implies all standard testing and reporting.
-n standby,10,q \ # Don't spin up disk if it is currently spun down
\ # unless it is 10th attempt in a row.
\ # Don't report unsuccessful attempts anyway.
-o on \ # Automatic offline tests (usually every 4 hours).
-S on \ # Attribute autosave (I don't really understand
\ # what it is for. If you can explain it to me
\ # please drop me a line.
-R 194 \ # Show real temperature in the logs.
-R 231 \ # The same as above.
-I 194 \ # Ignore temperature attribute changes
-W 3,50,50 \ # Notify if the temperature changes 3 degrees
\ # comparing to the last check or if
\ # the temperature exceeds 50 degrees.
-s (S/../.././02|L/../../1/22) \ # short test: every day 2-3am
\ # long test every Monday 10pm-2am
\ # (Long test takes a lot of time
\ # and it should be finished before
\ # daily short test starts.
\ # At 3am every day this disk will be
\ # used heavily as backup storage)
-m root \ # To whom we should send mails.
-M exec /usr/libexec/smartmontools/smartdnotify
Note: this will email root - if you don't monitor root's mails, then you may want to redirect mails sent to root to another email address
$ vim /etc/aliases
root: user@gmail.com
$ newaliases
Now we want to monitor the RAID array.
add the to/from email addresses to mdadm config
$ vim /etc/mdadm.conf
MAILADDR user@gmail.com
MAILFROM user+mdadm@gmail.com
I tested this was working by running /sbin/mdadm --monitor --scan --test
gmail automatically marked the test mail as spam, so I had to create a filter to explicitly not mark emails sent from user+mdadm@gmail.com as spam (note the +mdadm part of the email address, neat gmail trick)
replace a failed drive
find the drive's serial no
$ hdparm -i /dev/sdd | grep SerialNo
Model=WDC WD2003FZEX-00Z4SA0, FwRev=01.01A01, SerialNo=WD-WMC130D78F55
fail and remove the drive from the array
$ sudo mdadm --manage /dev/md0 --fail /dev/sdd
$ sudo mdadm --manage /dev/md0 --remove /dev/sdd
remove the old drive (using the above serial no to ensure you remove the correct drive), add the new one and add it to the array
$ sudo mdadm --manage /dev/md0 --add /dev/sdd
the array should start rebuilding
$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid6 sdd[4] sde[3] sdb[1] sdc[0]
3906765824 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/3] [UU_U]
[>....................] recovery = 0.5% (11702016/1953382912) finish=970.5min speed=33344K/sec
unused devices: <none>
Tools for monitoring:
logwatch – monitors my /var/log/messages for anything out of the ordinary and mails me the output on a daily basis.
mdadm – mdadm will mail me if a disk has completely failed or the raid for some other reason fails. A complete resync is done every week.
smartd – I have smartd running “short” tests every night and long tests every second week. Reports are mailed to me.
munin – graphical and historical monitoring of performance and all stats of the server.
Thursday, 15 May 2014
byobu keyboard commands
Running byobu in screen mode - Ctrl-A is command mode
F2 open new window
shift-F2 new horizontal split
ctrl-F2 new vertical split
F3/F4 cycle through windows
alt-left/right cycle through windows
shift-F3/F4 cycle through splits
shift-left/right/up/down cycle through splits
shift-alt-left/right/up/down resize split
ctrl-F3/F4 move split
ctrl-shift-F3/F4 move window
alt-F11 move split to new window
shift-F11 zoom split in/out (full screen)
F8 rename window
F6 detach session and log out
shift-F6 detach session
ctrl-F6 kill current split
F7 enter scrollback
alt-page up/page down enter and move through scrollback
enter exit scrollback
byobu-enable: Enable persistent byobu (launch automatically upon login)
Linux tools
byobu - text-based window manager and terminal multiplexer, enhanced screen
htop - interactive process viewer
nethogs - process bandwidth utilisation
nload - realtime bandwidth utilisation graph
iptraf - IP network traffic monitor
IP traffic monitor:
top pane: TCP traffic: source addresses, packets/bytes received, link status, interface
bottom pane: UDP / ICMP / broadcast traffic
Statistical breakdown by TCP/UDP
view traffic by protocol (only the common ports by default)
LAN station monitor:
IP traffic by MAC address
Wednesday, 14 May 2014
ITIL - Information Technology Infrastructure Library
ITIL can potentially provide a defined and structured argument for IT strategy, using IT to support and promote the business rather than just being a cost centre.
http://en.wikipedia.org/wiki/Information_Technology_Infrastructure_Library
Tuesday, 13 May 2014
angular-fullstack yeoman generator - use less version of bootstrap
I use the angular-fullstack yeoman generator, but it installs the compiled bootstrap css and I want to use the less version.
Here is how I hack the yeoman-generated source to change it to use the less version of bootstrap:
# install less and less grunt task:
sudo npm install -g less
npm install --save-dev grunt-contrib-less
# create app/styles/main.less file:
@import "../bower_components/bootstrap/less/bootstrap.less";
@import "inputs.less";
@import "../bower_components/bootstrap/less/utilities.less";
# rename the old app/styles/main.css to app/styles/inputs.less:
mv app/styles/main.css app/styles/inputs.less
# create less grunt tasks in Gruntfile.js:
// build our css
less: {
dev: {
files: {
"<%= yeoman.app %>/styles/main.css": "<%= yeoman.app %>/styles/main.less"
}
},
dist: {
options: {
cleancss: true
},
files: {
"<%= yeoman.dist %>/styles/main.css": "<%= yeoman.app %>/styles/main.less"
}
}
},
# add 'less' file extension to files in the styles section of the watch grunt task:
files: ['<%= yeoman.app %>/styles/{,*/}*.{css,less}'],
# add 'less:dev' to tasks in the styles section of the watch grunt task:
tasks: ['less:dev', 'newer:copy:styles', 'autoprefixer']
# also, add 'less:dev' to the debug and serve grunt tasks, and 'less:dist' to the build grunt task
# add exclude entry to the bower-install grunt task to prevent it from being injected into index.html:
exclude: [ 'bower_components/bootstrap/dist/css/bootstrap.css' ]
# add generated css to .gitignore:
echo app/styles/main.css >> .gitignore
Thursday, 1 May 2014
Solving SFINAE issues when you have overlapping conditions
Sometimes we have function templates which we want to use SFINAE on, but some of them have overlapping conditions, creating ambiguity
template<unsigned N, enable_if_t<is_multiple_of<N, 3>>...>
void print_fizzbuzz(){ std::cout << "fizz\n"; }
template<unsigned N, enable_if_t<is_multiple_of<N, 5>>...>
void print_fizzbuzz(){ std::cout << "buzz\n"; }
template<unsigned N, enable_if_t<is_multiple_of<N, 15>>...> // this is ambiguous
void print_fizzbuzz(){ std::cout << "fizzbuzz\n"; }
By using derived-to-base conversions we can create a total ordering for selecting SFINAE overloads.
That is, we resolve ambiguity by using the following inheritance hierarchy:
template<unsigned I> struct choice : choice<I+1>{};
choice<0> has a higher ordering than choice<1>, and we can therefore use choice<0> as a function parameter to make is_multiple_of<N, 15> a better overload, thereby resolving the ambiguity.
The complete fizzbuzz example:
#include <type_traits>
#include <iostream>

template<class C, class T = int>
using enable_if_t = typename std::enable_if<C::value, T>::type;
template<int N, int M>
struct is_multiple_of : std::integral_constant<bool, N % M == 0>{};
//-------------------------------
template<unsigned I> struct choice : choice<I+1>{};
template<> struct choice<10>{}; // suitably high terminating condition
struct otherwise{ otherwise(...){} };
struct select_overload : choice<0>{};
//-------------------------------
template<unsigned N, enable_if_t< is_multiple_of<N, 15> >...>
void print_fizzbuzz(choice<0>) { std::cout << "fizzbuzz\n"; }
template<unsigned N, enable_if_t< is_multiple_of<N, 3> >...>
void print_fizzbuzz(choice<1>) { std::cout << "fizz\n"; }
template<unsigned N, enable_if_t< is_multiple_of<N, 5> >...>
void print_fizzbuzz(choice<2>) { std::cout << "buzz\n"; }
template<unsigned N>
void print_fizzbuzz(otherwise){ std::cout << N << "\n"; }
template<unsigned N = 1>
void do_fizzbuzz()
{
print_fizzbuzz<N>(select_overload{});
do_fizzbuzz<N+1>();
}
template<>
void do_fizzbuzz<50>()
{
print_fizzbuzz<50>(select_overload{});
}
//-------------------------------
int main()
{
do_fizzbuzz();
}
This excellent technique is by Xeo, as described here.