Wednesday 24 July 2013

Parsing socket / cpu / hyperthreading information from /proc/cpuinfo

#!/bin/bash

# total number of sockets
NUM_SOCKETS=`grep physical\ id /proc/cpuinfo | sort -u | wc -l`

# total number of cores per socket
NUM_CORES=`grep cpu\ cores /proc/cpuinfo | sort -u | awk '{print $4}'`

# total number of physical cpus (cores per socket * number of sockets)
NUM_PHYSICAL_CPUS=$((NUM_SOCKETS * NUM_CORES))

echo "${NUM_SOCKETS} sockets"
echo "${NUM_CORES} cores per socket"
echo "${NUM_PHYSICAL_CPUS} physical processors"

# Work out if hyperthreading is enabled. 
#   This is done by working out how many siblings each core has
#   If it's not the same as the number of cores per socket then 
#     hyperthreading must be on
NUM_SIBLINGS=`grep siblings /proc/cpuinfo | sort -u | awk '{print $3}'`
echo "$NUM_SIBLINGS siblings"
if [ ${NUM_SIBLINGS} -ne ${NUM_CORES} ]
then
    # total number of logical cpus (ie: physical cpus + hyperthreading cpus)
    NUM_LOGICAL_CPUS=`grep processor /proc/cpuinfo | sort -u | wc -l`
    echo "hyperthreading is enabled - ${NUM_LOGICAL_CPUS} logical processors"
fi

# display which socket each logical cpu is on
#   pair each "processor" entry with the "physical id" that follows it in /proc/cpuinfo
echo "Sockets: Cores"
egrep "physical id|processor" /proc/cpuinfo | tr \\n ' ' | sed 's/processor/\nprocessor/g' | grep -v ^$ | awk '{printf "%d: %02d\n",$7,$3}' | sort

Notes on performance counters and profiling with PAPI

Performance counters

There are two main approaches to profiling applications with performance counters: aggregate (direct) and statistical (indirect).

  • Aggregate: Involves reading the counters before and after the execution of a region of code and recording the difference. This usage model permits explicit, highly accurate, fine-grained measurements. There are two sub-cases of aggregate counter usage: Summation of the data from multiple executions of an instrumented location, and trace generation, where the counter values are recorded for every execution of the instrumentation.
  • Statistical: The PM hardware is set to generate an interrupt when a performance counter reaches a preset value. This interrupt carries with it important contextual information about the state of the processor at the time of the event. Specifically, it includes the program counter (PC), the text address at which the interrupt occurred. By populating a histogram with this data, users obtain a probabilistic distribution of PM interrupt events across the address space of the application. This kind of profiling facilitates a good high-level understanding of where and why the bottlenecks are occurring. For instance, the questions, "What code is responsible for most of the cache misses?" and "Where is the branch prediction hardware performing poorly?" can quickly be answered by generating a statistical profile.
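
As a rough, hand-written sketch of the statistical model (not from the article), the example below uses PAPI's overflow interface; the choice of PAPI_L1_DCM, the threshold of 100000, and the way the program counter is folded into a small histogram are all illustrative assumptions.

#include <papi.h>

#define HIST_SIZE 65536
static long long hist[HIST_SIZE];   /* crude histogram keyed on the program counter */

/* PAPI calls this every time the counter crosses the threshold; 'address' is the PC */
static void overflow_handler(int EventSet, void *address, long_long overflow_vector, void *context)
{
    hist[((unsigned long)address >> 4) % HIST_SIZE]++;
}

int main(void)
{
    int EventSet = PAPI_NULL;
    long_long counts[1];

    PAPI_library_init(PAPI_VER_CURRENT);
    PAPI_create_eventset(&EventSet);
    PAPI_add_event(EventSet, PAPI_L1_DCM);

    /* interrupt every 100000 L1 data cache misses and record where they happened */
    PAPI_overflow(EventSet, PAPI_L1_DCM, 100000, 0, overflow_handler);

    PAPI_start(EventSet);
    /* ... run the code being profiled ... */
    PAPI_stop(EventSet, counts);
    return 0;
}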

PAPI supports two types of events: preset and native.
  • Preset events have a symbolic name associated with them that is the same for every processor supported by PAPI.
  • Native events, on the other hand, provide a means to access every possible event on a particular platform, regardless of whether a predefined PAPI event name exists for it.
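
As a hedged illustration (again not from the article), either kind of event can be resolved by name and added to a low-level event set; the native event name below is only a placeholder, since real native names vary from platform to platform. The high-level calls in the snippet further down do this set-up implicitly for preset events.

#include <papi.h>

int main(void)
{
    int EventSet = PAPI_NULL, code;

    PAPI_library_init(PAPI_VER_CURRENT);
    PAPI_create_eventset(&EventSet);

    /* preset event: the symbolic name is the same on every processor PAPI supports */
    PAPI_event_name_to_code("PAPI_TOT_CYC", &code);
    PAPI_add_event(EventSet, code);

    /* native event: names are platform-specific; this one is only a placeholder */
    PAPI_event_name_to_code("SOME_NATIVE_EVENT_NAME", &code);
    PAPI_add_event(EventSet, code);
    return 0;
}
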
PAPI supports per-thread measurements; that is, each measurement only contains counts generated by the thread performing the PAPI calls.

#include <papi.h>

int events[2] = { PAPI_L1_DCM, PAPI_FP_OPS }; // L1 data cache misses; hardware flops
long_long values[2];

PAPI_start_counters(events, 2);  // start counting in the calling thread
// do work
PAPI_read_counters(values, 2);   // copy the counts into values[] and reset the counters


Taken from a Dr. Dobb's article


Monday 22 July 2013

Voluntary/involuntary context switches

$ cat /proc/$PID/status

Voluntary context switches (voluntary_ctxt_switches) occur when your application blocks in a system call and the kernel decides to give its time slice to another process.

Involuntary context switches (nonvoluntary_ctxt_switches) occur when your application has used up the entire timeslice the scheduler allotted to it and is preempted.
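
A small sketch of my own (not part of the original note) that pulls just those two counters out of /proc/self/status for the current process; point it at another PID's status file to inspect a different process.

#include <stdio.h>
#include <string.h>

/* Print voluntary_ctxt_switches / nonvoluntary_ctxt_switches for this process */
int main(void)
{
    FILE *f = fopen("/proc/self/status", "r");
    char line[256];

    if (!f)
        return 1;
    while (fgets(line, sizeof line, f))
        if (strstr(line, "ctxt_switches"))
            fputs(line, stdout);
    fclose(f);
    return 0;
}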