Update column based on search value

UPDATE data SET tags = 'linux' WHERE data LIKE '% yum %';
Tags: mysql

Linux malware

Tags: linux

Prefer IPv4 over IPv6

Precedence blocks live in /etc/gai.conf. Locate this line:

#precedence ::ffff:0:0/96 100

Un-comment it so it reads:

precedence ::ffff:0:0/96 100

Then save the file.
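The same edit can be scripted with sed; a minimal sketch, shown here against a stand-in copy — point `conf` at /etc/gai.conf and run as root to apply it for real:

```shell
# Sketch: un-comment the exact precedence line.
conf=/tmp/gai.conf
printf '#precedence ::ffff:0:0/96 100\n' > "$conf"   # stand-in copy of the file

# Drop the leading "#" (and any spaces after it) from that one line.
sed -i 's|^#[[:space:]]*\(precedence ::ffff:0:0/96 100\)|\1|' "$conf"
grep '^precedence' "$conf"   # prints: precedence ::ffff:0:0/96 100
```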
Tags: linux

Find high inode files

find / -xdev -printf '%h\n' | sort | uniq -c | sort -k 1 -n
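A variant that builds a throwaway tree and keeps only the busiest directories; point the starting path at / (as above) to scan a whole filesystem:

```shell
# Demo tree so the pipeline has something deterministic to count.
top=$(mktemp -d)
mkdir -p "$top/busy" "$top/quiet"
touch "$top/busy/f1" "$top/busy/f2" "$top/busy/f3" "$top/quiet/f1"

# Per-directory entry counts, busiest first; -xdev stays on one filesystem,
# sort -rn | head keeps only the top offenders.
find "$top" -xdev -printf '%h\n' | sort | uniq -c | sort -rn | head -10
```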

Display all post values

Tags: code,php

OpenCloud PHP library: object upload transaction IDs

require 'vendor/autoload.php';
use OpenCloud\Rackspace;

$client = new Rackspace(Rackspace::US_IDENTITY_ENDPOINT, array(
    'username' => '{UserName}',
    'apiKey'   => '{API_KEY}'
));

$DC = '{DC}';                     // datacenter, e.g. DFW
$containername = '{CONTAINER}';
$filename = '{FILENAME}';

$objectStoreService = $client->objectStoreService(null, $DC);
$container = $objectStoreService->getContainer($containername);

try {
    $fileData = fopen($filename, 'r');
    $container->uploadObject($filename, $fileData);
    $timestamp = $container->getMetadata()->getProperty('timestamp');
    $transaction_id = $container->getMetadata()->getProperty('trans-id');

    echo "======================================Results================================\n";
    echo "DC: ".$DC."\n";
    echo "Container: ".$containername."\n";
    echo "Object Name: ".$filename."\n";
    echo "Timestamp: ".$timestamp."\n";
    echo "Transaction ID: ".$transaction_id."\n";
    echo "============================================================================\n";
} catch (Exception $e) {
    echo "Something Happened: ".$e;
}
Tags: code,php

PHP timezone offset

$date = "2016-05-27 12:00:00";
$user_tz = "America/Chicago";

function timezoneoffset($timezone){
    date_default_timezone_set("UTC");
    // Offset, in seconds, of $timezone from UTC right now.
    $timeOffset = timezone_offset_get(timezone_open($timezone), new DateTime());
    return $timeOffset;
}

function offset($time, $timeOffset){
    $time_new = strtotime($time) + $timeOffset;
    return date('Y-m-d H:i:s', $time_new);
}

echo $date."\n";
$timeOffset = timezoneoffset($user_tz);
echo offset($date, $timeOffset);

Force Delete Cloud Files Objects

curl -sX GET -H "X-Auth-Token: $token" $url/$CONTAINER >> temp.tmp
for i in $(cat temp.tmp); do curl -sX PUT -H "X-Auth-Token: $token" -H "Content-Length: 0" $url/$CONTAINER/$i; done
for i in $(cat temp.tmp); do curl -sX DELETE -H "X-Auth-Token: $token" -H "Content-Length: 0" $url/$CONTAINER/$i; done
Tags: rackspace

Windows driveclient

Tags: rackspace

Openssl update

yum update
yum install gcc openssl-devel
cd /usr/src
wget https://www.openssl.org/source/openssl-1.0.1s.tar.gz
tar -zxf openssl-1.0.1s.tar.gz
cd openssl-1.0.1s/
./config --prefix=/usr/local/ssl
make depend
make
make test
make install
mv /usr/bin/openssl /root/
ln -s /usr/local/ssl/bin/openssl /usr/bin/openssl
openssl version
Tags: linux

CloudFiles TempURL Code

require 'vendor/autoload.php';
use OpenCloud\Rackspace;

// Replace {USERNAME} and {APIKEY} with the account username and API key
$client = new Rackspace('https://identity.api.rackspacecloud.com/v2.0/', array(
    'username' => '{USERNAME}',
    'apiKey'   => '{APIKEY}',
));

try {
    $service = $client->identityService();

    // Replace {DC} with the datacenter
    $objectStoreService = $client->objectStoreService(null, '{DC}');

    // Replace {CONTAINER} with the container you want to use
    $container = $objectStoreService->getContainer('{CONTAINER}');

    // Replace {OBJECT} with the object name
    $object = $container->getPartialObject('{OBJECT}');

    $account = $objectStoreService->getAccount();

    // Replace {SECRET KEY} with your made-up secret key; it must be
    // set before a temporary URL can be generated.
    $account->setTempUrlSecret('{SECRET KEY}');

    $expirationTime = 3600;   // seconds the URL stays valid
    $httpMethod = 'GET';
    $tempUrl = $object->getTemporaryUrl($expirationTime, $httpMethod);

    // Replace {FILENAME} with the name you would like the URL to use
    $url = $tempUrl.'&filename={FILENAME}';
    echo $url;
} catch (\Guzzle\Http\Exception\BadResponseException $e) {
    echo $e->getRequest(), "\n\n", $e->getResponse();
}
Tags: rackspace

PHP PDO insert call

try {
    $db = new PDO('mysql:host=HOST;dbname=DBNAME;charset=utf8', 'USER', 'PASSWORD');
    $db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

    // Build the column list from the table itself.
    $q = $db->prepare("DESCRIBE user");
    $q->execute();
    $table_fields = $q->fetchAll(PDO::FETCH_COLUMN);

    $arr = array('bob', 'email', 'address', 'city');

    $DBValues = ":".implode(",:", $table_fields);
    $DBCols   = implode(",", $table_fields);

    // id is auto-increment, so bind it to null and map the rest.
    $DBData = array(':id' => null);
    for ($i = 1; $i < count($table_fields); $i++) {
        $DBData[":$table_fields[$i]"] = $arr[$i - 1];
    }

    $stmt = $db->prepare("INSERT INTO user($DBCols) VALUES($DBValues)");
    $stmt->execute($DBData);
    $affected_rows = $stmt->rowCount();
    echo $affected_rows."\n";
} catch (PDOException $e) {
    echo 'Connection failed: '.$e->getMessage();
}

Bash Lessons

Lesson 1:

1. Create a script that echoes out Hello !

a. where the name is a command line argument

2. write a script using an if/else to check if the command line argument is red or blue

3. edit the first script to take two arguments

a. where the second argument is a special message

4. create a script where the response changes based on the name entered

5. create a script that uses two variables like name and favorite color

6. create a script that saves the command line arguments to a file

7. read variables from the file in lesson 1.6 and echo them to the command line

Lesson 2:

1. write a script to rename a file and add the current date to the file

2. write a script to create # of files based on command line arguments

a. Create a script based on lesson 2.2 that creates # files named 1, 2, and 3

b. change the ownership on the files using a command line argument for the username, with group staff

Lesson 3:

1. write a script that writes the output of a tcpdump -i eth0 -c 10 to a file networkoutput.txt

2. write a script that sorts the data in networkoutput.txt, makes a list of only the source IP addresses, and saves it to sortedIPs.txt

3. write a nested if statement that reads sortedIPs.txt and counts the number of times each IP shows up in the log networkoutput.txt
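A minimal sketch of Lessons 1.1 and 1.2 (the script name is hypothetical):

```shell
# hello.sh - Lessons 1.1 and 1.2 in one sketch.
cat > /tmp/hello.sh <<'EOF'
#!/bin/bash
# Lesson 1.1: greet the first command line argument.
echo "Hello $1!"

# Lesson 1.2: branch on whether the argument is red or blue.
if [ "$1" = "red" ]; then
    echo "You picked red"
elif [ "$1" = "blue" ]; then
    echo "You picked blue"
else
    echo "Pick red or blue"
fi
EOF

bash /tmp/hello.sh red
# prints:
# Hello red!
# You picked red
```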
Tags: code,bash

Basic MySQLi connection string

$dbuser = "";
$database = "";
$dbpassword = "";
$dbhost = "";
$dbtable ="";
$mysqli = new mysqli($dbhost, $dbuser, $dbpassword, $database);

/* check connection */
if ($mysqli->connect_errno) {
    printf("Connect failed: %s\n", $mysqli->connect_error);
    exit();
}

if ($result = $mysqli->query("SELECT * FROM $dbtable WHERE user_id=1;")) {
    printf("Select returned %d rows.\n", $result->num_rows);
    /* free result set */
    $result->close();
}
Tags: mysql

List Cloud Monitoring checks and their status

for i in $(curl -sX GET -H "X-Auth-Token: $token" $url/entities | grep '"id":' | sed 's/["id:,]//g'); do curl -sX GET -H "X-Auth-Token: $token" $url/views/overview | grep -A 5 $i; done

Curl webpage

sudo apt-get install curl html2text
curl -A Mozilla 'https://www.google.com/search?q=curl' | html2text -width 70
Tags: linux

Log SSH sessions

ssh [user]@[server] | tee -a logfile
$ ping api.drivesrvr.com -c3
$ ping rse.drivesrvr.com -c3
$ ping snet-storage101.dfw1.clouddrive.com -c3

msiexec /i driveclient-latest.msi /qn /l*v %tmp%\install-driveclient-latest.log APIUSER= APIKEY= APIHOSTNAME=region.backup.api.rackspacecloud.com DATACENTER=IAD DEBUGHIGH=true

Check Cloudfiles Headers

for i in $(curl -sX GET -H "X-Auth-Token: $token" $url | grep ; do for k in $(curl -isX GET -H "X-Auth-Token: $token" $url/$i); do curl -isX GET -H "X-Auth-Token: $token" $url/$k;done; done
Tags: rackspace

U.S. Postal Service abbreviations

Apartment: APT
Avenue: AVE
Beach: BCH
Boulevard: BLVD
Building: BLDG
Canyon: CYN
Center: CTR
Circle: CIR
Court: CT
Crescent: CRES
Crossing: XING
Department: DEPT
Drive: DR
Expressway: EXPY
Falls: FLS
Field: FLD
Floor: FL
Fort: FT
Gardens: GDNS
Harbor: HBR
Heights: HTS
Highway: HWY
Hills: HLS
Island: IS
Junction: JCT
Lake: LK
Landing: LNDG
Lane: LN
Lodge: LDG
Mount: MT
Mountain: MTN
Office: OFC
Parkway: PKWY
Penthouse: PH
Plaza: PLZ
Point: PT
Post Office Box: P.O. Box
Road: RD
Room: RM
Route: RTE
Square: SQ
Station: STA
Street: ST
Suite: STE
Terrace: TER
Turnpike: TPKE
Valley: VLY

LBaaS access log parser

printf '=%.0s' {1..100};zcat $log | awk '{SUM += $12} END {print $1 "_" SUM}' | awk -F '_' '{print "\nDDI: " $1 "\nLoad Balancer ID: "$2 "\nTotal Data Usage: " $3}';printf '*%.0s' {1..100}; printf "\nHTTPcode HitCount ClientIP Objects\n"; zcat $log | awk '{print $11 " " $3 " " $9}' | sort | uniq -c | awk '{printf "%-15s %-8s %-20s %-20s\n",$2,$1,$3,$4}'; printf '*%.0s' {1..100}; zcat $log | awk '{if($11 == "200"){SUM200 += $12}} END {print "\n200 HTTP Response Code: \nData Usage: " SUM200}'; printf "%-20s %-15s %-5s\n" "LB Node:" "HitCount:" "Objects:"; zcat $log | awk '{print $11 " " $9 " " $15}' | sort | grep '2[0-9][0-9]' | uniq -c | awk '{printf "%-22s %-17s %-5s\n",$4,$1,$3}'; printf '*%.0s' {1..100}; zcat $log | awk '{if($11 == "403"){SUM403 += $12}} END {print "\n403 HTTP Response Code: \nData Usage: " SUM403}'; printf "%-20s %-15s %-5s\n" "LB Node:" "HitCount:" "Objects:"; zcat $log | awk '{print $11 " " $9 " " $15}' | sort | grep '2[0-9][0-9]' | uniq -c | awk '{printf "%-22s %-17s %-5s\n",$4,$1,$3}'; printf '*%.0s' {1..100}; zcat $log | awk '{if($11 == "404"){SUM404 += $12}} END {print "\n404 HTTP Response Code: \nData Usage: " SUM404}'; printf "%-20s %-15s %-5s\n" "LB Node:" "HitCount:" "Objects:"; zcat $log | awk '{print $11 " " $9 " " $15}' | sort | grep '2[0-9][0-9]' | uniq -c | awk '{printf "%-22s %-17s %-5s\n",$4,$1,$3}'; printf '=%.0s' {1..100};printf "\n"; printf "Top Requesters: \n#Requests: IP Address:\n"; zcat $log | awk '{print $3}'| sort -k1,1n | uniq -c | awk '{printf "%-13s %s\n", $1, $2}'

Sample Output:

Load Balancer ID:
Total Data Usage:
HTTPcode HitCount ClientIP Objects
200 60 xx.xx.xx.xx /
200 1 xx.xx.xx.xx /pic.html
200 60 xx.xx.xx.xx /
403 61 xx.xx.xx.xx /pow.html
403 60 xx.xx.xx.xx /pow.html
404 1 xx.xx.xx.xx /myadmin/scripts/setup.php
404 1 xx.xx.xx.xx /MyAdmin/scripts/setup.php
404 1 xx.xx.xx.xx /phpmyadmin/scripts/setup.php
404 1 xx.xx.xx.xx /phpMyAdmin/scripts/setup.php
404 1 xx.xx.xx.xx /pma/scripts/setup.php
404 1 xx.xx.xx.xx /w00tw00t.at.blackhats.romanian.anti-sec:)
404 61 xx.xx.xx.xx /testing.html
404 60 xx.xx.xx.xx /testing.html
200 HTTP Response Code:
Data Usage: 47948
LB Node: HitCount: Objects:
xx.xx.xx.xx:80 62 /
xx.xx.xx.xx:80 58 /
xx.xx.xx.xx:80 1 /pic.html
xx.xx.xx.xx:80 65 /pow.html
xx.xx.xx.xx:80 1 /MyAdmin/scripts/setup.php
xx.xx.xx.xx:80 61 /testing.html
403 HTTP Response Code:
Data Usage: 57354
LB Node: HitCount: Objects:
xx.xx.xx.xx:80 62 /
xx.xx.xx.xx:80 58 /
xx.xx.xx.xx:80 1 /pic.html
xx.xx.xx.xx:80 65 /pow.html
xx.xx.xx.xx:80 1 /MyAdmin/scripts/setup.php
xx.xx.xx.xx:80 61 /testing.html
404 HTTP Response Code:
Data Usage: 60420
LB Node: HitCount: Objects:
xx.xx.xx.xx:80 62 /
xx.xx.xx.xx:80 58 /
xx.xx.x.xx:80 1 /pic.html
xx.xx.xx.xx:80 65 /pow.html
xx.xx.xx.xx:80 1 /MyAdmin/scripts/setup.php
xx.xx.xx.xx:80 61 /testing.html
Top Requesters:
#Requests: IP Address:
183 xx.xx.xx.xx
180 xx.xx.xx.xx
6 xx.xx.xx.xx

LBaaS Log Format

HTTPS/Everything else:

"%v %t %h %A:%p %n %B %b %T"

"Virtual server name", "Current time", "The client's IP address", "The IP address that the client connected to, Port number that the client connected to", "Node that was used for the connection", "Number of bytes received from the client", "Number of bytes sent to the client", "Response time of the node used, in seconds"

%v %{Host}i %h %l %u %t "%r" %s %b "%{Referer}i" "%{User-Agent}i"

"Virtual server name", "The value of a named header in the HTTP request ", "The client's IP address", "The remote logname. This is for compatibility only and will always return - ", "Remote user - the username with HTTP basic authentication", "First line of the HTTP request", "Status code of HTTP response (e.g. 200)", "Number of bytes sent to the client", "The value of a named header in the HTTP request", "The value of a named header in the HTTP request"

Get service catalog by DC and Token

Get Service catalog URLs:
curl -s -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials":{ "username": "$username", "apiKey": "$apikey"} } }' -H 'Content-Type: application/json' 'https://identity.api.rackspacecloud.com/v2.0/tokens' | python -m json.tool | grep publicURL | awk '{print $2}'| sed 's/[",]//g' | grep $DC

Get Token:
curl -s -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials":{ "username": "$username", "apiKey": "$apikey"} } }' -H 'Content-Type: application/json' 'https://identity.api.rackspacecloud.com/v2.0/tokens' | python -m json.tool | grep -A 5 token | grep id | awk '{print $2}'| sed 's/[",]//g'
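The extraction half of the pipeline can be exercised against a fabricated, already pretty-printed response body (every value below is made up):

```shell
# Fabricated identity response to show what the grep/awk/sed tail extracts.
response='{
    "access": {
        "token": {
            "expires": "2016-06-01T00:00:00Z",
            "id": "abc123tokenvalue"
        }
    }
}'
echo "$response" | grep -A 5 token | grep id | awk '{print $2}' | sed 's/[",]//g'
# prints: abc123tokenvalue
```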
Tags: linux


for i in $(curl -sX GET -H "X-Auth-Token: $token" $cfed/$container/); do curl -sH "X-Auth-Token: $token" $cfed/$container/$i -o /dev/null -D - -w "%{url_effective}\n" | grep HTTP; done

Show the output of a Cloud Files API call: container name, object count, and container size (converted to TiB/GiB/MiB/KiB):


for i in $(curl -sH "X-Auth-Token: $token" ${cfEP});do echo $i;curl -sH "X-Auth-Token: $token" ${cfEP}/${i} -XHEAD -D -|grep 'X-Container';done | sed 's/X-Container-//g' | sed '/^Meta/ d' | awk '{if ($1 == "Bytes-Used:" && $2 >1024*1024*1024*1024) print "Size: " $2/1024/1024/1024/1024"TiB";else if ($1 == "Bytes-Used:" && $2>1024*1024*1024) print "Size: " $2/1024/1024/1024"GiB";else if ($1 == "Bytes-Used:" && $2 >1024*1024) print "Size: " $2/1024/1024"MiB";else if($1 == "Bytes-Used:" && $2 >1024) print "Size: " $2/1024"KiB";else if ($1 == "Bytes-Used:" && $2 < 1024) print "Size:" $2; print $1 $2}'| sed '/Bytes-Used:/ d'; date




Size: 2.89293MiB



Size: 5.98451MiB



Size: 107.427KiB



To pull the data for a single container:

cont=[container name]

curl -sH "X-Auth-Token: $token" ${cfEP}/$cont -XHEAD -D -|grep 'X-Container' | sed 's/X-Container-//g' | sed '/^Meta/ d' | awk '{if ($1 == "Bytes-Used:" && $2 >1024*1024*1024*1024) print "Size: " $2/1024/1024/1024/1024"TiB";else if ($1 == "Bytes-Used:" && $2>1024*1024*1024) print "Size: " $2/1024/1024/1024"GiB";else if ($1 == "Bytes-Used:" && $2 >1024*1024) print "Size: " $2/1024/1024"MiB";else if($1 == "Bytes-Used:" && $2 >1024) print "Size: " $2/1024"KiB";else if ($1 == "Bytes-Used:" && $2 < 1024) print "Size:" $2; print $1 $2}'| sed '/Bytes-Used:/ d'; date
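To sanity-check the unit conversion, a trimmed-down version of the awk above can be fed a fabricated Bytes-Used line (3145728 bytes = 3 MiB):

```shell
# Fabricated header value run through the same size-conversion logic.
echo "Bytes-Used: 3145728" | awk '{
    if ($1 == "Bytes-Used:" && $2 > 1024*1024*1024*1024) print "Size: " $2/1024/1024/1024/1024 "TiB";
    else if ($1 == "Bytes-Used:" && $2 > 1024*1024*1024) print "Size: " $2/1024/1024/1024 "GiB";
    else if ($1 == "Bytes-Used:" && $2 > 1024*1024)      print "Size: " $2/1024/1024 "MiB";
    else if ($1 == "Bytes-Used:" && $2 > 1024)           print "Size: " $2/1024 "KiB";
    else                                                 print "Size: " $2
}'
# prints: Size: 3MiB
```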

Remote run commands

ssh root@[server] 'reboot'

ssh [user]@[server] '[command 1]; [command 2]; [command 3]'

Executing a Local Script on a Remote Linux Server

$ ssh [user]@[server] 'bash -s' < [local_script]

Execute the Local Script 'local_script.sh' on the Remote Machine

$ ssh root@[server] 'bash -s' < local_script.sh
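Since bash -s just reads the script from standard input, the mechanism can be tried locally without ssh; `--` separates bash's own options from arguments handed to the script (file name hypothetical):

```shell
# Stand-in for local_script.sh.
cat > /tmp/local_script.sh <<'EOF'
echo "got arg: $1"
EOF

# Same mechanism as the ssh form, run locally: the script arrives on stdin.
bash -s -- hello < /tmp/local_script.sh
# prints: got arg: hello

# Over ssh the equivalent would be:
#   ssh [user]@[server] 'bash -s -- hello' < /tmp/local_script.sh
```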
Tags: linux


Exclude domain name from Varnish

if (req.http.host ~ "(www\.)?\.com"){


awk '{print $1,$2,$3,$6,$7}' report

Fake a http error code

Here's some configuration you can drop in an Apache conf to generate a simple default 500:

Redirect 500 /error500

Accessing /yourserver/error500, /error500/more/path, or /error500/more/path?with=query will all return a 500 response with Apache's default 500 body.
Tags: linux

Rackspace Restful API

How to make Rackspace API calls to interact with Rackspace services.

First you need to get the Rackspace Token:

You will need curl installed on the computer you are going to be making the calls from, and you will need the username and API key for your account.

RESTful API HTTP verbs:

GET - retrieve a resource
POST - create a resource
PUT - create or replace a resource
DELETE - remove a resource
HEAD - retrieve headers only

curl -s -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials":{ "username": "", "apiKey": ""} } }' -H 'Content-Type: application/json' 'https://identity.api.rackspacecloud.com/v2.0/tokens' | python -m json.tool

This will return the Rackspace token for your account and the service endpoints for the Rackspace API services.

Example of uploading a file to Cloud Files:

curl -X PUT -T "[filename]" -H "X-Auth-Token: [token]" [CF endpoint]/[container]/[object]

This is done via a PUT call to the API. You will need the [filename], [token], [CF endpoint], [container], and [object]:

[filename] is the name of the file to be uploaded

[token]: Rackspace account token

[CF endpoint]: Cloud Files endpoint from the service catalog, gotten by authenticating to https://identity.api.rackspacecloud.com

[container]: Name of the container that the object will be placed in within Cloud Files

[object]: The name of the object as it will be listed in Cloud Files once it is uploaded

List Containers:

curl -X GET -H "X-Auth-Token: [token]" [CF endpoint]

List objects:

curl -X GET -H "X-Auth-Token: [token]" [CF endpoint]/[container]

Cloud Servers:

List Servers:

curl -X GET -H "X-Auth-Token: [token]" [servers endpoint]/servers

List Server Details:

curl -X GET -H "X-Auth-Token: [token]" [servers endpoint]/servers/[server-id]

tcp offloading

$ ethtool --offload eth0 rx off tx off
$ ethtool -K eth0 gso off
Tags: linux

Powershell script

Import-Module ServerManager ; Get-WindowsFeature | Where-Object {$_.Installed -match "True"} | Select-Object -Property Name | Out-File C:\Users\Administrator\Desktop\report.txt
Get-ItemProperty HKLM:\Software\Microsoft\Windows\CurrentVersion\Uninstall\* | Select-Object DisplayName, DisplayVersion, Publisher, InstallDate | Format-Table -AutoSize | Out-File C:\Users\Administrator\Desktop\report.txt -Append
ipconfig /all | Out-File C:\Users\Administrator\Desktop\report.txt -Append
route PRINT | Out-File C:\Users\Administrator\Desktop\report.txt -Append
Get-WmiObject -Class Win32_NetworkAdapterConfiguration -Filter IPEnabled=TRUE -ComputerName . | Out-File C:\Users\Administrator\Desktop\report.txt -Append
netsh advfirewall firewall show rule name=all | Out-File C:\Users\Administrator\Desktop\report.txt -Append

Navigate Windows Explorer More Quickly with These Keyboard Shortcuts

Ctrl+N Open a new window on the same folder.

Ctrl+W Close the current window.

Alt+Up Arrow Go up one level.

Alt+Right Arrow Go forward.

Alt+Left Arrow Go back.

Alt+D Move the focus to the address bar, and select the current path.

F4 Move the insertion point to the address bar, and display the contents of the drop-down list of previous addresses.

Alt+Enter Show properties of the selected file.

Shift+F10 Open the shortcut menu for the current selection (which is the same as a right-click).

F6 Cycle through the following elements: address bar, toolbar, navigation pane, file list, column headings (available in Details view only).

Tab Cycle through the following elements: address bar, search box, toolbar, navigation pane, file list, column headings (available in Details view only).

F11 Toggle full-screen mode.

Ctrl+Shift+N Create a new subfolder in the current folder.

Ctrl+Shift+E Expand navigation pane to the current folder.
Tags: windows

PHP get X-Forwarded-For

Tags: code,php

Grep flags

Grep tricks:
-A shows # of lines after the found line
-B shows # of lines before the found line
-C shows # of lines before and after the found line
-c counts the number of matching lines
--color shows the found item in color
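A quick throwaway input makes the context flags easy to compare:

```shell
# Three-line input to see the context flags in action.
printf 'one\ntwo\nthree\n' > /tmp/grep_demo.txt

grep -A 1 two /tmp/grep_demo.txt   # two, three
grep -B 1 two /tmp/grep_demo.txt   # one, two
grep -C 1 two /tmp/grep_demo.txt   # one, two, three
grep -c t /tmp/grep_demo.txt       # 2 (lines containing "t": two, three)
```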

Latin ordinal names

Primary -> First
Secondary -> Second
Tertiary -> Third
Quaternary -> Fourth
Quinary -> Fifth
Senary -> Sixth
Septenary -> Seventh
Octonary -> Eighth
Nonary -> Ninth

Keyboard commands

Moving Around Quickly and Editing Quickly:

Basic editing on the command line involves moving around with the arrow keys and deleting characters with Backspace or Delete. When there are more than only a few characters to move or delete, using these basic keys is just too slow. You can do the same much faster by knowing just a handful of interesting shortcuts:

Ctrl-w: cut text backward until space.

Esc-Backspace: cut one word backward.

Esc-Delete: cut one word forward.

Ctrl-k: cut from current position until the end of the line.

Ctrl-y: paste the most recently cut text.

Not only is it faster to delete portions of a line chunk by chunk like this, but an added bonus is that text deleted this way is saved in a register so that you can paste it later if needed. Take, for example, the following sequence of commands:

git init --bare /path/to/repo.git
git remote add origin /path/to/repo.git

Notice that the second command uses the same path at the end. Instead of typing that path twice, you could copy and paste it from the first command, using this sequence of keystrokes:

Press the up arrow to bring back the previous command.

Press Ctrl-w to cut the path part: "/path/to/repo.git".

Press Ctrl-c to cancel the current command.

Type git remote add origin, and press Ctrl-y to paste the path.

Some of the editing shortcuts are more useful in combination with moving shortcuts:

Ctrl-a: jump to the beginning of the line.

Ctrl-e: jump to the end of the line.

Esc-b: jump one word backward.

Esc-f: jump one word forward.

Jumping to the beginning is very useful if you mistype the first words of a long command. You can jump to the beginning much faster than with the left-arrow key.

Jumping forward and backward is very practical when editing the middle part of a long command, such as the middle of long path segments.
Tags: linux

SSL Best Practices

Tags: linux

Windows Best Practices

1. Perform administration tasks using the least level of privileges. Avoid using administrator privileges when possible. Ensure that all accounts with administrator rights are protected by strong passwords enforced through password policies and that the passwords are changed on a regular schedule.
2. Administrator accounts policy: Each administrator should have their own account so that changes can be tracked.
Windows 2008:
Open System Configuration by clicking the Start button, clicking Control Panel, clicking System and Security, clicking Administrative Tools, and then double-clicking System Configuration. Administrator permission is required: if you're prompted for an administrator password or confirmation, type the password or provide confirmation.
Windows 2012:
Open Server Manager. Click Tools in the top right corner of Server Manager and then select Computer Management. In the Computer Management window, expand Local Users and Groups and select Groups. Double-click the Administrators group. In the Administrators Properties, click Add... In the Select Users, Computers, Service Accounts, or Groups window, type the account you want to add to the local Administrators group and then click OK. Click OK.
3. Local Security Policies allow the server to have policies that control the minimum password length and the maximum password age (the default is 42 days). It is generally advisable to change the passwords for accounts on the server on a regular basis to keep the passwords from being brute forced. For the Microsoft password complexity requirements, please see: https://technet.microsoft.com/en-us/library/cc786468%28v=ws.10%29.aspx
4. Firewall configuration: To help protect the server the firewall should be configured to only open access to the services that need to be accessible.
Limit access to the Remote Desktop port 3389 to a specific IP address or IP address range; this will limit the number of IP addresses that are able to try to connect to the server via Remote Desktop. The same is true of the database server: access should be limited to only the systems that need it.
Click Start -> Administrative Tools -> Windows Firewall with Advanced Security. Then select the rule that you would like to edit. You can either double-click, or single-click to select the rule and then click Properties; both will open the properties window. Next click the Scope tab, then under Remote IP address select the These IP addresses radio button. Then click Add... and specify the IP address or IP network that should be allowed access to the port.

5. Practice isolation of services: When planning a Windows environment build-out, try to have a server for each part of the operation, i.e. web server, database, app server, etc. This way if a server crashes or is compromised it does not bring down the entire operation.
6. Plan security patches and the operating system upgrade process: The first step in bringing a new server online is to make sure that all security updates are applied to the server.

Create a schedule for applying patches so that patches are applied during a period of low traffic to the server.

Also, you should have a plan in place to upgrade the version of Windows Server that your application or web site is running on before the current version reaches End of Life. Microsoft will End-of-Life an operating system based on their lifecycle; please see https://windows.microsoft.com/en-us/windows/lifecycle for Microsoft's current lifecycle. Once they End-of-Life an operating system they stop releasing patches for it. Once a server can no longer be patched it becomes a risk to the applications that are running on it. You will want to plan ahead and have a plan in place to allow the migration of the site, app, or services to a new server running a newer operating system.
7. Development server: A best practice is to never run untested code or software on a production server; instead, run it on a development server. This allows code and software to be tested before it is migrated to the production environment, limiting the risk of exposing a bug to production.
8. Audit software and configuration of the server: Audit the server and document what software and services are installed and running. This establishes a baseline of installed software so that, in the event the server is compromised, there is a list of what was installed and how it was configured.
9. Auditing access to the servers:
Windows 2008:
Start -> All Programs -> Administrative Tools -> Local Security Policy. In the Local Security Policy tool, expand the Local Policies branch of the tree and select Audit Policy. A good place to start is to enable Audit account logon events and Audit privilege use. This will log both logon events and when permissions are escalated.
Windows 2012:
Computer Configuration -> Policies -> Windows Settings -> Security Settings -> Advanced Audit Policy Configuration. A good place to start is to enable Audit account logon events and Audit privilege use. This will log both logon events and when permissions are escalated.
10. Limit access to file shares: Remove any unnecessary Windows file shares and limit access to trusted systems. You should check what network shares are available on the server, and check the file permissions configured on each share to verify that they are not world readable. Also avoid granting access to the Users group. A better way to handle this is to create a new group, add the users that need access to the share, and then grant access based on the new group.
11. Anti-virus: If files are going to be uploaded to the server by users and visitors to your site, the files should be scanned by anti-virus software to make sure that no malicious code is introduced to the server. Also make sure that the anti-virus software is up to date.
12. Event logs: Event logs should be checked on a regular basis, as they provide insight into any issue that might be affecting the server; the security log in particular provides insight into most security attempts on the server, as it will log failed login attempts. They can also be backed up and stored offsite, or backed up using a backup tool.
13. Cloud Networks: Take advantage of Cloud Networks to create isolated private networks between Cloud Servers in the same data center to further lock down the server. Additional resources:
Best Practices Analyzer: https://technet.microsoft.com/en-us/library/dd759260.aspx
14. IIS: Create a backup of your site on a regular basis, based on how often the files change.
15. IIS limit permissions: Limit permissions granted to non-administrator accounts. Check to see if any folders have non-administrator write access or script execution permissions and if they do not need the access remove the permissions.
16. IIS SSL: Use SSL when using basic authentication for your site. If you use basic authentication without SSL clients can send their data in plain text which can be intercepted by malicious code.
17. IIS application pools: Depending on your needs, it is generally recommended to have an isolated application pool per web site. You can create application pools from the UI or the command line.
1. From the IIS Manager, navigate to the Connections pane.
2. Choose the Application Pools option, and then choose Add Application Pool to open the Add Application Pool dialog box.
3. Enter a unique name for the application pool.
4. Choose the version, if any, of the Microsoft .NET Framework for the application pool to use, and then choose your pipeline mode.
18. IIS Inetpub: Move the Inetpub folder from the system partition to a data partition; this will save space on the system disk and also creates a more secure server, as the Inetpub folder is no longer part of the system partition.
19. IIS FTPS: IIS should be configured to use FTPS: https://www.iis.net/learn/publish/using-the-ftp-service/using-ftp-over-ss...
20. IIS SSL certificate: Store the SSL certificate outside of the Inetpub folder path, and also make sure that it is not world readable.
Tags: windows

Managing your Drupal project with Git

Maintaining a Drupal site with a change management system allows you to take advantage of a number of features that are not available when managing the site based on just its files. Git lets you manage the versioning of the site as you make changes and develop it fully. It also lets you set up development versions of the site, so your developers can work on development copies rather than directly on the production site. This has a clear advantage over dealing only with a production site: while you're making changes to the development version you're not actually affecting the production version of the code. Managing a Drupal site in this fashion also lets multiple developers work on separate code bases from separate locations without having to make changes to the production environment.
Creating the git repository:
Creating the central repository
On github.com you will want to set up an account that allows private repositories, as you do not want to make the code for your site available to the public. If you choose to use Github you can skip to the next part.
Create user for git
$ adduser git

This will create a user named git that will be in charge of the git repositories for the site.
Install git on the remote server (Debian-based distributions: run apt-get update && apt-get install git; Red Hat distributions: yum update && yum install git)
Switch to the git user:
$ su git

Switch to the home directory for the git user:
$ cd ~

Make a directory to house the git repositories:
$ mkdir git

Move into the git directory:
$ cd git

Create the base project with the git server:
$ git init --bare [Project].git

Replace [Project] with the name of the project or the domain name of the site, depending on whether Drupal is running the full site or just the blog. This will create a directory for the domain under git/[Project].git
Uploading the production site to git repository:
On the Web Server
Change to the path of the sites directory of the Drupal site
$ cd /path/to/website/sites/

Initialize the local Git repository:
$ git init

Create the base README.md and stage it:

$ touch README.md
$ git add README.md

Add the base directory to the local git repository
$ git add .

Commit the files from the base location to the project:
$ git commit -m "initial commit"

Adds the remote git project from the git server:
$ git remote add origin git@[git server address]:/home/git/[Project].git

Pushes the files in the local git repository to the git server repository:
$ git push origin master

This will create a new branch on the remote git server. From here the sites directory is checked into the remote git repository, so if you need to check out the sites directory you can clone the branch.
To test whether the commit worked, you can switch back to the home directory for the user:
$ cd ~
$ mkdir temp
$ git init
$ git remote add origin git@[git server address]:/home/git/[Project].git
(Replace [git server address] with the address of the server and [Project] repository name)
Or if using Github.com
git remote add github https://github.com/[USERNAME]/[Project].git
(Replace [USERNAME] with the Github username and [Project] repository name)
Once the project is pushed to the git repository server it can be used to create additional Drupal web nodes, create development environments, or restore the Drupal site in case of data loss.
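The bare-repo round trip above can be rehearsed entirely on one machine before involving a remote server (the /tmp paths and identity values below are throwaway examples):

```shell
# Rehearse the workflow locally: bare "server" repo, working clone,
# commit, push, then verify the commit arrived on the server side.
rm -rf /tmp/Project.git /tmp/work
git init --bare /tmp/Project.git          # the "server" repository
git clone /tmp/Project.git /tmp/work      # a working clone
cd /tmp/work
git config user.email "dev@example.com"   # identity just for this repo
git config user.name "Dev"
echo "Drupal site files" > README.md
git add README.md
git commit -m "initial commit"
git push origin HEAD                      # publish the current branch
git --git-dir=/tmp/Project.git log --oneline   # shows the pushed commit
```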


Install nload on a Debian or Ubuntu Linux

Type the following apt-get command:
$ sudo apt-get install nload

Simulate package update apt-get

To list the packages that would be upgraded before actually installing them:
apt-get -s upgrade

OPERATION WINDIGO: UNIX Servers Hijacked By Backdoor Trojan

OPERATION WINDIGO: Malware Used To Attack Over 500,000 Computers Daily After 25,000 UNIX Servers Hijacked By Backdoor Trojan

How To Tell If Your Server Has Fallen Foul Of Windigo

ESET researchers, who named Windigo after a mythical creature from Algonquian Native American folklore because of its cannibalistic nature, are appealing for Unix system administrators and webmasters to run the following command which will tell them if their server is compromised or not:

$ ssh -G 2>&1 | grep -e illegal -e unknown > /dev/null && echo "System clean" || echo "System infected"
Tags: code,bash

Level3 interactive map

Tags: data

PHP mail call

$Name = ""; // sender's name
$email = ""; // sender's e-mail address
$recipient = ""; // recipient
$mail_body = "The text for the mail..."; // mail body
$subject = ""; // subject
$header = "From: " . $Name . " <" . $email . ">\r\n"; // optional header fields

mail($recipient, $subject, $mail_body, $header); // send the mail
Tags: code,php

Great message ideas

1. Message has been sent. Don't believe me? Look above.

dovecot error message

Mar 27 11:38:20 host dovecot: imap-login: Time just moved backwards by 729 seconds. This might cause a lot of problems, so I'll just kill myself now.

HTTP response codes

100 Informational
200 Success
300 Redirection
400 Client Error
500 Server Error
Tags: http

Reconfigure Debian Server Packages

dpkg-reconfigure -plow phpmyadmin
Tags: linux

E-Mail Etiquette

When you send email to someone, you are addressing another person directly as if you were calling them on the phone or writing them a letter. Thus the following should always be done when sending email:

Always include a short subject field in your mail message

Always include a salutation to the person to whom you are writing (i.e. Hi Joe, or Dear Frank, or Sue, or Dear Mx. Yin, etc.)

Always nicely space the text of your message

Never type more than about 70 characters per line of message text before hitting a return

Always sign your name after the text of your email message

Always include relevant information after your name (phone number, email address, postal address, organization, etc.)
This information would depend on how well you know the person, or what they might need to know to contact you, etc.

BEFORE you send your message, read it over, correct spelling and/or grammar, and make sure that it clearly specifies what you are talking about and provides all the information the person will need in order to respond sensibly.

Repair the Windows Network Stack

In Windows there is a small feature allowing you to repair a network connection. Go to the Network Connections options in Control panel (Control Panel > Network Connections), right click on the network connection you want and choose the repair option.

It is possible to run the same command by using the Netsh utility, with the following command line:
Before running this command please record your Network settings IP address, Subnet, and Gateway for public and the IP address, Subnet for Private and any other interfaces on the server.
netsh int ip reset c:\network-connection.log
This is the output of netsh int ip reset c:\network-connection.log:
C:\Users\Administrator.WINDOWS2010>netsh int ip reset c:\network-connection.log

Reseting Global, OK!
Reseting Interface, OK!
Reseting Unicast Address, OK!
Reseting Route, OK!
Restart the computer to complete this action.
c:\network-connection.log is the path of the file in which the report will be stored.

The netsh int ip command allows you to reset the TCP/IP stack.

With Windows XP Service Pack 2, you can use:

netsh winsock reset catalog

This resets the Winsock catalog, which manages TCP/IP sockets. This can be used to handle network problems such as browser trouble, IP address related issues, etc.
Tags: windows

Setup DFS Replication

Using Windows Server 2008, Cloud Servers, and Cloud Block Storage to create branch-office DFS Replication groups and a central file data store for a multisite corporate environment.
DFS Replication is an efficient, multiple-master replication engine that you can use to keep folders synchronized between servers across limited bandwidth network connections.
DFS Replication uses a compression algorithm known as remote differential compression (RDC). RDC detects changes to the data in a file and enables DFS Replication to replicate only the
changed file blocks instead of the entire file, which saves bandwidth.

To use DFS Replication, you first must create a replication group and add replicated folders to it.
Creating multiple replicated folders in a single replication group simplifies the process of deploying replicated folders because the topology, schedule, and bandwidth throttling for the
replication group are applied to each replicated folder. To deploy additional replicated folders, you can use Dfsradmin.exe or follow the instructions in a wizard to define the local path
and permissions for the new replicated folder.

You will need to decide if you want to use a Full Mesh topology or a Hub and Spoke topology for your DFS network.

Full Mesh. Every member of the replication group replicates with every other member of the group.
Hub and Spoke. Every spoke member replicates with the hub member, and if desired you can add a second hub member for fault tolerance (the two hub members replicate with each other).

The full mesh: The full mesh topology is useful where all subnets have high speed connectivity and you are using DFS Namespaces together with DFS Replication to provide
fault-tolerant shared file resources to users.

The hub and spoke: The hub and spoke topology is great for networks of mixed speed and is particularly useful for enterprises that have a large headquarters where the company's permanent IT staff are located and
multiple small branch offices with little or no on-site IT staff present. For such branch offices, one big concern is ensuring that reliable backups are done.

You can administer DFS Replication by using DFS Management, the DfsrAdmin and Dfsrdiag commands, or scripts that call WMI.

Create a Replication Group:
1. Click Start, point to Administrative Tools, and then click DFS Management.
2. In the console tree, right-click the Replication node, and then click New Replication Group.
3. Follow the instructions in the New Replication Group Wizard.
For more information please visit /technet.microsoft.com/en-us/library/cc770925.aspx

Add a Replicated Folder to a Replication Group:
1. Click Start, point to Administrative Tools, and then click DFS Management.
2. In the console tree, under the Replication node, right-click a replication group, and then click New Replicated Folders.
3. Follow the instructions in the New Replicated Folders Wizard.
For more information please visit /technet.microsoft.com/en-us/library/cc770925.aspx

Add a Member to a Replication Group:
1. Click Start, point to Administrative Tools, and then click DFS Management.
2. In the console tree, under the Replication node, right-click a replication group, and then click New Member.
3. Follow the instructions in the New Member Wizard.
For more information please visit /technet.microsoft.com/en-us/library/cc770925.aspx

Create a Connection:
1. Click Start, point to Administrative Tools, and then click DFS Management.
2. In the console tree, under the Replication node, right-click the replication group that you want to create a new connection in, and then click New Connection.
3. Specify the sending and receiving members, and specify the schedule to use for the connection. At this point, replication is one-way.
4. Select Create a second connection in the opposite direction to create a second connection for two-way replication between the sending and receiving members. All members must have two-way connections.
Delegate the Ability to Manage DFS Replication
For more information please visit /technet.microsoft.com/en-us/library/cc770925.aspx

DFS Replication is a pull process, not a push process. You will need to set up a schedule for DFS Replication to update the data with the central DFS servers.

The best practice is to use a site-to-site VPN link any time data travels over the public internet.
Tags: windows

Cloud Server with Local DFS

1. Create a VPN network link
2. Create the file share on the Cloud server
3. Limit access to the file share on the Cloud server
4. Set up the VPN link to the Cloud server from the local server
5. Set up DFS on the local Windows server
6. Configure local caching on the local server
7. Configure access to the file share for the local user base
8. Verify that users can access the DFS share
9. Set up backup of the Cloud server with cloud image and cloud backup

Aero Snap Ubuntu

What Is Aero Snap And Why Should I Use It?

For those who don't know what Aero Snap is, I shall try to sum it up in one sentence.
Aero Snap allows you to minimize, maximize and resize windows by simply drag-dropping them to the sides of the screen.

It's useful for comparing the contents of two windows side-by-side.

For example, you have two tabs open in Google Chrome but want to view their contents side-by-side rather than switching tabs. Easy. Peel off one of the tabs, drag it to the left - BAM! Drag the second window to the right - BOOM! There they are:

Drag them away and ta-da! they resize back.

Enable "Aero Snap" In Ubuntu

You will need Compiz enabled and the following applications installed:

sudo apt-get install compizconfig-settings-manager wmctrl

Now you're all set to begin.

Open the Compiz Config Settings Manager (ALT+F2 ccsm, System > Preferences > CompizConfig, etc.)
Select the "Commands" option.
In "Command Line 0" paste:

WIDTH=`xdpyinfo | grep 'dimensions:' | cut -f 2 -d ':' | cut -f 1 -d 'x'` && HALF=$(($WIDTH/2)) && wmctrl -r :ACTIVE: -b add,maximized_vert && wmctrl -r :ACTIVE: -e 0,0,0,$HALF,-1

In "Command Line 1" paste:

WIDTH=`xdpyinfo | grep 'dimensions:' | cut -f 2 -d ':' | cut -f 1 -d 'x'` && HALF=$(($WIDTH/2)) && wmctrl -r :ACTIVE: -b add,maximized_vert && wmctrl -r :ACTIVE: -e 0,$HALF,0,$HALF,-1

And in "Command Line 2" paste:

wmctrl -r :ACTIVE: -b add,maximized_vert,maximized_horz


Now choose the "Edge Bindings" tab at the top and set the following:

Run Command 0 - Set to Left
Run Command 1 - Set to Right
Run Command 2 - Set to Top

Click on the back button and go to "General options".

Set the "Edge Trigger Delay" to something around 400-500 by dragging the slider to the right.

Now all you have to do is drag a window to one of the specified sides and your window will automatically resize.

Drupal clean url not working

From the sites folder run:
drush vset clean_url 0 --yes
Tags: drupal

Windows commands

Accessibility Controls access.cpl
Accessibility Wizard accwiz
Add Hardware Wizard hdwwiz.cpl
Add/Remove Programs appwiz.cpl
Administrative Tools control admintools
Adobe Acrobat (if installed) acrobat
Adobe Designer (if installed) formdesigner
Adobe Distiller (if installed) acrodist
Adobe ImageReady (if installed) imageready
Adobe Photoshop (if installed) photoshop
Automatic Updates wuaucpl.cpl
Bluetooth Transfer Wizard fsquirt
Calculator calc
Certificate Manager certmgr.msc
Character Map charmap
Check Disk Utility chkdsk
Clipboard Viewer clipbrd
Command Prompt cmd
Component Services dcomcnfg
Computer Management compmgmt.msc
Control Panel control
Date and Time Properties timedate.cpl
DDE Shares ddeshare
Device Manager devmgmt.msc
Direct X Control Panel (if installed)* directx.cpl
Direct X Troubleshooter dxdiag
Disk Cleanup Utility cleanmgr
Disk Defragment dfrg.msc
Disk Management diskmgmt.msc
Disk Partition Manager diskpart
Display Properties control desktop
Display Properties desk.cpl
Display Properties (w/Appearance Tab Preselected) control color
Dr. Watson System Troubleshooting Utility drwtsn32
Driver Verifier Utility verifier
Event Viewer eventvwr.msc
Files and Settings Transfer Tool migwiz
File Signature Verification Tool sigverif
Findfast findfast.cpl
Firefox (if installed) firefox
Folders Properties folders
Fonts control fonts
Fonts Folder fonts
Free Cell Card Game freecell
Game Controllers joy.cpl
Group Policy Editor (XP Prof) gpedit.msc
Hearts Card Game mshearts
Help and Support helpctr
HyperTerminal hypertrm
Iexpress Wizard iexpress
Indexing Service ciadv.msc
Internet Connection Wizard icwconn1
Internet Explorer iexplore
Internet Properties inetcpl.cpl
Internet Setup Wizard inetwiz
IP Configuration (Display Connection Configuration) ipconfig /all
IP Configuration (Display DNS Cache Contents) ipconfig /displaydns
IP Configuration (Delete DNS Cache Contents) ipconfig /flushdns
IP Configuration (Release All Connections) ipconfig /release
IP Configuration (Renew All Connections) ipconfig /renew
IP Configuration (Refreshes DHCP & Re-Registers DNS) ipconfig /registerdns
IP Configuration (Display DHCP Class ID) ipconfig /showclassid
IP Configuration (Modifies DHCP Class ID) ipconfig /setclassid
Java Control Panel (if installed) jpicpl32.cpl
Java Control Panel (if installed) javaws
Keyboard Properties control keyboard
Local Security Settings secpol.msc
Local Users and Groups lusrmgr.msc
Logs You Out Of Windows logoff
Malicious Software Removal Tool mrt
Microsoft Access (if installed) msaccess
Microsoft Chat winchat
Microsoft Excel (if installed) excel
Microsoft Frontpage (if installed) frontpg
Microsoft Movie Maker moviemk
Microsoft Paint mspaint
Microsoft Powerpoint (if installed) powerpnt
Microsoft Word (if installed) winword
Microsoft Synchronization Tool mobsync
Minesweeper Game winmine
Mouse Properties control mouse
Mouse Properties main.cpl
Nero (if installed) nero
Netmeeting conf
Network Connections control netconnections
Network Connections ncpa.cpl
Network Setup Wizard netsetup.cpl
Notepad notepad
Nview Desktop Manager (if installed) nvtuicpl.cpl
Object Packager packager
ODBC Data Source Administrator odbccp32.cpl
On Screen Keyboard osk
Opens AC3 Filter (if installed) ac3filter.cpl
Outlook Express msimn
Paint pbrush
Password Properties password.cpl
Performance Monitor perfmon.msc
Performance Monitor perfmon
Phone and Modem Options telephon.cpl
Phone Dialer dialer
Pinball Game pinball
Power Configuration powercfg.cpl
Printers and Faxes control printers
Printers Folder printers
Private Character Editor eudcedit
Quicktime (If Installed) QuickTime.cpl
Quicktime Player (if installed) quicktimeplayer
Real Player (if installed) realplay
Regional Settings intl.cpl
Registry Editor regedit
Registry Editor regedt32
Remote Access Phonebook rasphone
Remote Desktop mstsc
Removable Storage ntmsmgr.msc
Removable Storage Operator Requests ntmsoprq.msc
Resultant Set of Policy (XP Prof) rsop.msc
Scanners and Cameras sticpl.cpl
Scheduled Tasks control schedtasks
Security Center wscui.cpl
Services services.msc
Shared Folders fsmgmt.msc
Shuts Down Windows shutdown
Sounds and Audio mmsys.cpl
Spider Solitaire Card Game spider
SQL Client Configuration cliconfg
System Configuration Editor sysedit
System Configuration Utility msconfig
System File Checker Utility (Scan Immediately) sfc /scannow
System File Checker Utility (Scan Once At The Next Boot) sfc /scanonce
System File Checker Utility (Scan On Every Boot) sfc /scanboot
System File Checker Utility (Return Scan Setting To Default) sfc /revert
System File Checker Utility (Purge File Cache) sfc /purgecache
System File Checker Utility (Sets Cache Size to size x) sfc /cachesize=x
System Information msinfo32
System Properties sysdm.cpl
Task Manager taskmgr
TCP Tester tcptest
Telnet Client telnet
Tweak UI (if installed) tweakui
User Account Management nusrmgr.cpl
Utility Manager utilman
Windows Address Book wab
Windows Address Book Import Utility wabmig
Windows Backup Utility (if installed) ntbackup
Windows Explorer explorer
Windows Firewall firewall.cpl
Windows Magnifier magnify
Windows Management Infrastructure wmimgmt.msc
Windows Media Player wmplayer
Windows Messenger msmsgs
Windows Picture Import Wizard (need camera connected) wiaacmgr
Windows System Security Tool syskey
Windows Update Launches wupdmgr
Windows Version (to show which version of windows) winver
Windows XP Tour Wizard tourstart
Wordpad write
Tags: windows


Create multiple directories with one mkdir

mkdir -p myProject/{src,doc,tools,db}

The above creates the top-level directory myProject, along with all of the subdirectories (myProject/src, myProject/doc, etc.). How does it work? There are two things of note about the command above:

The -p flag: This tells mkdir to create any leading directories that do not already exist. Effectively, it makes sure that myProject gets created before creating myProject/src.
The {} lists: The technical name for these is "brace expansion lists". Basically, the shell interprets this as a list of items that should be appended individually to the preceding path. Thus, a/{b,c} is expanded into a/b a/c.

You can nest brace expansion lists. That means you can create more complex sets of subdirectories like this:

mkdir -p myProject/{src,doc/{api,system},tools,db}
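Since the shell performs the brace expansion before mkdir ever runs, you can preview exactly what mkdir will receive by swapping in echo (brace expansion is a bash feature, so run this under bash):

```shell
# echo receives the already-expanded list, one path per word:
echo myProject/{src,doc/{api,system},tools,db}
# myProject/src myProject/doc/api myProject/doc/system myProject/tools myProject/db
```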

Innodb Memory Usage

Finally, it is often helpful to check how much memory InnoDB has allocated. In fact, this is often one of the first things I do, as it is the least intrusive. Run SHOW ENGINE INNODB STATUS and look for the memory information block, which looks like this:

Trx id counter 0 80157601
Purge done for trx's n:o History list length 6
Total number of lock structs in row lock hash table 0
---TRANSACTION 0 0, not started, process no 3396, OS thread id 1152440672
MySQL thread id 8080, query id 728900 localhost root
show innodb status
---TRANSACTION 0 80157600, ACTIVE 4 sec, process no 3396, OS thread id 1148250464, thread declared inside InnoDB 442
mysql tables in use 1, locked 0
MySQL thread id 8079, query id 728899 localhost root Sending data
select sql_calc_found_rows * from b limit 5
Trx read view will not see trx with id >= 0 80157601, sees ---TRANSACTION 0 80157599, ACTIVE 5 sec, process no 3396, OS thread id 1150142816 fetching rows, thread declared inside InnoDB 166
mysql tables in use 1, locked 0
MySQL thread id 8078, query id 728898 localhost root Sending data
select sql_calc_found_rows * from b limit 5
Trx read view will not see trx with id >= 0 80157600, sees ---TRANSACTION 0 80157598, ACTIVE 7 sec, process no 3396, OS thread id 1147980128 fetching rows, thread declared inside InnoDB 114
mysql tables in use 1, locked 0
MySQL thread id 8077, query id 728897 localhost root Sending data
select sql_calc_found_rows * from b limit 5
Trx read view will not see trx with id >= 0 80157599, sees ---TRANSACTION 0 80157597, ACTIVE 7 sec, process no 3396, OS thread id 1152305504 fetching rows, thread declared inside InnoDB 400
mysql tables in use 1, locked 0
MySQL thread id 8076, query id 728896 localhost root Sending data
select sql_calc_found_rows * from b limit 5
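As a small sketch, the memory block can be pulled out of a captured status dump with sed. The section markers below come from typical SHOW ENGINE INNODB STATUS output, but treat the exact boundaries as assumptions that may vary by MySQL version; a tiny stand-in dump is created here so the snippet is self-contained:

```shell
# In practice you would capture the real output first, e.g.:
#   mysql -e 'SHOW ENGINE INNODB STATUS\G' > innodb-status.txt
# Stand-in dump for illustration:
cat > innodb-status.txt <<'EOF'
----------------------
BUFFER POOL AND MEMORY
----------------------
Total memory allocated 137363456; in additional pool allocated 0
Buffer pool size   8192
--------------
ROW OPERATIONS
--------------
EOF

# Print only the memory section:
sed -n '/BUFFER POOL AND MEMORY/,/^ROW OPERATIONS/p' innodb-status.txt
```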


AF Afghanistan
AL Albania
DZ Algeria
AS American Samoa
AD Andorra
AO Angola
AI Anguilla
AQ Antarctica
AG Antigua and Barbuda
AR Argentina
AM Armenia
AW Aruba
AU Australia
AT Austria
AZ Azerbaijan
BS Bahamas
BH Bahrain
BD Bangladesh
BB Barbados
BY Belarus
BE Belgium
BZ Belize
BJ Benin
BM Bermuda
BT Bhutan
BO Bolivia
BA Bosnia and Herzegovina
BW Botswana
BV Bouvet Island
BR Brazil
IO British Indian Ocean Territory
BN Brunei Darussalam
BG Bulgaria
BF Burkina Faso
BI Burundi
KH Cambodia
CM Cameroon
CA Canada
CV Cape Verde
KY Cayman Islands
CF Central African Republic
TD Chad
CL Chile
CN China
CX Christmas Island
CC Cocos (Keeling) Islands
CO Colombia
KM Comoros
CG Congo
CK Cook Islands
CR Costa Rica
CI Cote D'Ivoire (Ivory Coast)
HR Croatia (Hrvatska)
CU Cuba
CY Cyprus
CZ Czech Republic
DK Denmark
DJ Djibouti
DM Dominica
DO Dominican Republic
TP East Timor
EC Ecuador
EG Egypt
SV El Salvador
GQ Equatorial Guinea
ER Eritrea
EE Estonia
ET Ethiopia
FK Falkland Islands (Malvinas)
FO Faroe Islands
FJ Fiji
FI Finland
FR France
FX France, Metropolitan
GF French Guiana
PF French Polynesia
TF French Southern Territories
GA Gabon
GM Gambia
GE Georgia
DE Germany
GH Ghana
GI Gibraltar
GR Greece
GL Greenland
GD Grenada
GP Guadeloupe
GU Guam
GT Guatemala
GN Guinea
GW Guinea-Bissau
GY Guyana
HT Haiti
HM Heard and McDonald Islands
HN Honduras
HK Hong Kong
HU Hungary
IS Iceland
IN India
ID Indonesia
IR Iran
IQ Iraq
IE Ireland
IL Israel
IT Italy
JM Jamaica
JP Japan
JO Jordan
KZ Kazakhstan
KE Kenya
KI Kiribati
KP Korea (North)
KR Korea (South)
KW Kuwait
KG Kyrgyzstan
LA Laos
LV Latvia
LB Lebanon
LS Lesotho
LR Liberia
LY Libya
LI Liechtenstein
LT Lithuania
LU Luxembourg
MO Macau
MK Macedonia
MG Madagascar
MW Malawi
MY Malaysia
MV Maldives
ML Mali
MT Malta
MH Marshall Islands
MQ Martinique
MR Mauritania
MU Mauritius
YT Mayotte
MX Mexico
FM Micronesia
MD Moldova
MC Monaco
MN Mongolia
MS Montserrat
MA Morocco
MZ Mozambique
MM Myanmar
NA Namibia
NR Nauru
NP Nepal
NL Netherlands
AN Netherlands Antilles
NC New Caledonia
NZ New Zealand
NI Nicaragua
NE Niger
NG Nigeria
NU Niue
NF Norfolk Island
MP Northern Mariana Islands
NO Norway
OM Oman
PK Pakistan
PW Palau
PA Panama
PG Papua New Guinea
PY Paraguay
PE Peru
PH Philippines
PN Pitcairn
PL Poland
PT Portugal
PR Puerto Rico
QA Qatar
RE Reunion
RO Romania
RU Russian Federation
RW Rwanda
KN Saint Kitts and Nevis
LC Saint Lucia
VC Saint Vincent and The Grenadines
WS Samoa
SM San Marino
ST Sao Tome and Principe
SA Saudi Arabia
SN Senegal
SC Seychelles
SL Sierra Leone
SG Singapore
SK Slovak Republic
SI Slovenia
SB Solomon Islands
SO Somalia
ZA South Africa
GS S. Georgia and S. Sandwich Isls.
ES Spain
LK Sri Lanka
SH St. Helena
PM St. Pierre and Miquelon
SD Sudan
SR Suriname
SJ Svalbard and Jan Mayen Islands
SZ Swaziland
SE Sweden
CH Switzerland
SY Syria
TW Taiwan
TJ Tajikistan
TZ Tanzania
TH Thailand
TG Togo
TK Tokelau
TO Tonga
TT Trinidad and Tobago
TN Tunisia
TR Turkey
TM Turkmenistan
TC Turks and Caicos Islands
TV Tuvalu
UG Uganda
UA Ukraine
AE United Arab Emirates
UK United Kingdom
US United States
UM US Minor Outlying Islands
UY Uruguay
UZ Uzbekistan
VU Vanuatu
VA Vatican City State (Holy See)
VE Venezuela
VN Viet Nam
VG Virgin Islands (British)
VI Virgin Islands (US)
WF Wallis and Futuna Islands
EH Western Sahara
YE Yemen
YU Yugoslavia
ZR Zaire
ZM Zambia
ZW Zimbabwe

Another State Abbreviation

Alabama AL
Alaska AK
Arizona AZ
Arkansas AR
California CA
Colorado CO
Connecticut CT
Delaware DE
Florida FL
Georgia GA
Hawaii HI
Idaho ID
Illinois IL
Indiana IN
Iowa IA
Kansas KS
Kentucky KY
Louisiana LA
Maine ME
Maryland MD
Massachusetts MA
Michigan MI
Minnesota MN
Mississippi MS
Missouri MO
Montana MT
Nebraska NE
Nevada NV
New Hampshire NH
New Jersey NJ
New Mexico NM
New York NY
North Carolina NC
North Dakota ND
Ohio OH
Oklahoma OK
Oregon OR
Pennsylvania PA
Rhode Island RI
South Carolina SC
South Dakota SD
Tennessee TN
Texas TX
Utah UT
Vermont VT
Virginia VA
Washington WA
West Virginia WV
Wisconsin WI
Wyoming WY


List block devices with lsblk

lsblk lists information about all or the specified block devices. The lsblk command reads the sysfs filesystem to gather information.

The command prints all block devices (except RAM disks) in a tree-like format by default. Use lsblk --help to get a list of all available columns.

The default output, as well as the default output from options like --topology and --fs, is subject to change, so whenever possible you should avoid using default outputs in your scripts. Always explicitly define the expected columns with --output in environments where stable output is required.

-a, --all

lsblk does not list empty devices by default. This option disables this restriction.
-b, --bytes

Print the SIZE column in bytes rather than in human-readable format.
-d, --nodeps

Don't print device holders or slaves. For example "lsblk --nodeps /dev/sda" prints information about the sda device only.
-D, --discard

Print information about the discard (TRIM, UNMAP) capabilities for each device.
-e, --exclude list

Exclude the devices specified by a comma-separated list of major device numbers. Note that RAM disks (major=1) are excluded by default. The filter is applied to the top-level devices only.
-I, --include list

Include devices specified by a comma-separated list of major device numbers only. The filter is applied to the top-level devices.
-f, --fs

Output info about filesystems. This option is equivalent to "-o NAME,FSTYPE,LABEL,MOUNTPOINT". The authoritative information about filesystems and raids is provided by the blkid(8) command.
-h, --help

Print a help text and exit.
-i, --ascii

Use ASCII characters for tree formatting.
-m, --perms

Output info about device owner, group and mode. This option is equivalent to "-o NAME,SIZE,OWNER,GROUP,MODE".
-l, --list

Use the list output format.
-n, --noheadings

Do not print a header line.
-o, --output list

Specify which output columns to print. Use --help to get a list of all supported columns.
-P, --pairs

Use key="value" output format. All potentially unsafe characters are hex-escaped (\x).
-r, --raw

Use the raw output format. All potentially unsafe characters are hex-escaped (\x) in NAME, KNAME, LABEL, PARTLABEL and MOUNTPOINT columns.
-s, --inverse

Print dependencies in inverse order.
-t, --topology

Output info about block device topology. This option is equivalent to "-o NAME,ALIGNMENT,MIN-IO,OPT-IO,PHY-SEC,LOG-SEC,ROTA,SCHED,RQ-SIZE".
-V, --version

Output version information and exit.
Tags: humpty

State Abbreviation

"Alabama AL", "Alaska AK ", "Arizona AZ ", "Arkansas AR ", "California CA ", "Colorado CO ", "Connecticut CT ", "Delaware DE ", "Florida FL ", "Georgia GA ", "Hawaii HI ", "Idaho ID ", "Illinois IL ", "Indiana IN ", "Iowa IA ", "Kansas KS ", "Kentucky KY ", "Louisiana LA ", "Maine ME ", "Maryland MD ", "Massachusetts MA ", "Michigan MI ", "Minnesota MN ", "Mississippi MS ", "Missouri MO ", "Montana MT", "Nebraska NE", "Nevada NV", "New Hampshire NH", "New Jersey NJ", "New Mexico NM", "New York NY", "North Carolina NC", "North Dakota ND", "Ohio OH", "Oklahoma OK", "Oregon OR", "Pennsylvania PA", "Rhode Island RI", "South Carolina SC", "South Dakota SD", "Tennessee TN", "Texas TX", "Utah UT", "Vermont VT", "Virginia VA", "Washington WA", "West Virginia WV", "Wisconsin WI", "Wyoming WY"

Drupal turn off user signup


Drupal script secure


#!/bin/bash
if [ $(id -u) != 0 ]; then
  printf "This script must be run as root.\n"
  exit 1
fi

drupal_path=${1%/}
drupal_user=${2}
httpd_group="${3:-www-data}"


# Help menu
print_help() {
cat <<-HELP

This script is used to fix permissions of a Drupal installation
you need to provide the following arguments:

1) Path to your Drupal installation.
2) Username of the user that you want to give files/directories ownership.
3) HTTPD group name (defaults to www-data for Apache).

Usage: (sudo) bash ${0##*/} --drupal_path=PATH --drupal_user=USER --httpd_group=GROUP

Example: (sudo) bash ${0##*/} --drupal_path=/usr/local/apache2/htdocs --drupal_user=john --httpd_group=www-data

HELP
exit 0
}

# Parse Command Line Arguments
while [ $# -gt 0 ]; do
  case "$1" in
    --drupal_path=*)
      drupal_path="${1#*=}";;
    --drupal_user=*)
      drupal_user="${1#*=}";;
    --httpd_group=*)
      httpd_group="${1#*=}";;
    --help) print_help;;
    *)
      printf "Invalid argument, run --help for valid arguments.\n";
      exit 1;;
  esac
  shift
done

if [ -z "${drupal_path}" ] || [ ! -d "${drupal_path}/sites" ] || [ ! -f "${drupal_path}/core/modules/system/system.module" ] && [ ! -f "${drupal_path}/modules/system/system.module" ]; then
  printf "Please provide a valid Drupal path.\n"
  exit 1
fi

if [ -z "${drupal_user}" ] || [ $(id -un ${drupal_user} 2> /dev/null) != "${drupal_user}" ]; then
  printf "Please provide a valid user.\n"
  exit 1
fi

cd $drupal_path
printf "Changing ownership of all contents of \"${drupal_path}\":\n user => \"${drupal_user}\" \t group => \"${httpd_group}\"\n"
chown -R ${drupal_user}:${httpd_group} .

printf "Changing permissions of all directories inside \"${drupal_path}\" to \"rwxr-x---\"...\n"
find . -type d -exec chmod u=rwx,g=rx,o= '{}' \;

printf "Changing permissions of all files inside \"${drupal_path}\" to \"rw-r-----\"...\n"
find . -type f -exec chmod u=rw,g=r,o= '{}' \;

printf "Changing permissions of \"files\" directories in \"${drupal_path}/sites\" to \"rwxrwx---\"...\n"
cd ${drupal_path}/sites
find . -type d -name files -exec chmod ug=rwx,o= '{}' \;
printf "Changing permissions of all files inside all \"files\" directories in \"${drupal_path}/sites\" to \"rw-rw----\"...\n"
printf "Changing permissions of all directories inside all \"files\" directories in \"${drupal_path}/sites\" to \"rwxrwx---\"...\n"

for x in ./*/files; do
  find ${x} -type d -exec chmod ug=rwx,o= '{}' \;
  find ${x} -type f -exec chmod ug=rw,o= '{}' \;
done

echo "Done setting proper permissions on files and directories"

SYN Flood

How to reduce SYN flooding?

Debian Linux has a nice file for setting kernel values at boot time. This file can be found at /etc/sysctl.conf

If you open and edit it, you will find many values you can tune to improve security on your server.

I think the most important value you can set to secure your TCP connection is:


Another thing you can do is reduce the timeout value from 60 to 30 seconds. This is not standard TCP behavior at all, but at least connections will be recycled faster than by default.

Note: keep in mind this reduces the impact of SYN flooding; it will not stop it completely. Make sure you don't set this value too low, otherwise it could create packet-loss situations.
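The kernel settings discussed above live in /etc/sysctl.conf. As a hedged sketch (the specific keys and values below are common choices for this purpose, not taken from the original note):

```
# Enable SYN cookies, the standard kernel-side SYN-flood mitigation
net.ipv4.tcp_syncookies = 1
# Retry SYN-ACKs fewer times so half-open connections expire sooner
net.ipv4.tcp_synack_retries = 3
```

Apply the changes without rebooting with `sysctl -p`.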


The last thing you can do is create iptables rules to rate-limit SYN packets on your server.

# create new chain
iptables -N syn-flood

# send incoming TCP SYN packets through the new chain
iptables -A INPUT -p tcp --syn -j syn-flood

# limits incoming packets
iptables -A syn-flood -m limit --limit 10/second --limit-burst 50 -j RETURN

# log attacks
iptables -A syn-flood -j LOG --log-prefix "SYN flood: "

# silently drop the rest
iptables -A syn-flood -j DROP
Tags: linux


Block w00tw00t scan requests

iptables -I INPUT -p tcp --dport 80 -m string --to 70 --algo bm --string 'GET /w00tw00t.at.ISC.SANS.' -j DROP
Tags: linux

Check Apache Vhost configuration

httpd -S

MySQL socket missing

MySQL Socket Error in phpMyAdmin
While accessing phpMyAdmin, you may get the following error.
#2002 - The server is not responding (or the local MySQL server's socket is not correctly configured)

This is due to the missing socket file in the location /tmp.

The socket path which is specified in the phpMyAdmin configuration file is /tmp/mysql.sock.

$ vi /usr/local/cpanel/base/3rdparty/phpMyAdmin/config.inc.php
$cfg['Servers'][$i]['socket'] = '/tmp/mysql.sock';

If mysql.sock is missing in /tmp, then create a link to the mysql.sock file in /var/lib/mysql.
$ ln -s /var/lib/mysql/mysql.sock /tmp/mysql.sock

There is also another fix for this issue.
1. Open the phpMyAdmin config file config.inc.php:
vi /usr/local/cpanel/base/3rdparty/phpMyAdmin/config.inc.php
2. Locate the line:
$cfg['Servers'][$i]['host'] = 'localhost';
3. Replace 'localhost' with '127.0.0.1' (which forces a TCP connection instead of the socket) and save:
$cfg['Servers'][$i]['host'] = '127.0.0.1';
This will also fix the issue.
Tags: mysql

Drupal Login Page


Linux password gen

You can use the following shell function to generate random password. Edit ~/.bashrc file, enter:
$ vi $HOME/.bashrc
Append the following code:

genpasswd() {
local l=$1
[ "$l" == "" ] && l=16
tr -dc A-Za-z0-9_ < /dev/urandom | head -c ${l} | xargs
}

Save and close the file. Source ~/.bashrc again, enter:
$ source ~/.bashrc
To generate random password, enter:
$ genpasswd
To generate 8 character long random password, enter:
$ genpasswd 8
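A quick self-contained sanity check of this helper. The function is re-declared here with the usual /dev/urandom pipeline (an assumption about the intended implementation) so the snippet runs on its own under bash:

```shell
genpasswd() {
  local l=$1
  [ "$l" == "" ] && l=16
  # draw random bytes, keep only password-safe characters, trim to length
  tr -dc A-Za-z0-9_ < /dev/urandom | head -c ${l} | xargs
}

p=$(genpasswd 8)
echo "${#p}"   # length of the generated password: 8
```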
Tags: linux

Check max supported RAM

Linux / Unix: Find Out Maximum RAM Supported By The Server BIOS / Motherboard

by nixCraft on February 15, 2012, last updated February 15, 2012

How do I find out the maximum RAM supported by the Dell / HP / IBM / Oracle / Sun / Intel / AMD server under Linux / Unix / HP-UX / FreeBSD / Solaris operating systems without rebooting the server or opening server case (cabinet)?

Most modern servers support 16GB, 32GB, 64GB or more RAM and have 4 or more DIMM slots. To find out the maximum the system can support, type the following command as the root user:
# dmidecode -t 16
Sample outputs:

# dmidecode 2.11
SMBIOS 2.5 present.
Handle 0x0016, DMI type 16, 15 bytes
Physical Memory Array
Location: System Board Or Motherboard
Use: System Memory
Error Correction Type: None
Maximum Capacity: 64 GB
Error Information Handle: Not Provided
Number Of Devices: 8

(Fig.01: This server supports maximum 64 GB and has 8 DIMM slots)
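The "Maximum Capacity" value can be pulled out of that output directly. This is a sketch that feeds the sample output above in via a here-doc so the parsing can be tried without root; on a real server you would pipe `dmidecode -t 16` in instead.

```shell
# Extract the value after "Maximum Capacity:" from dmidecode -t 16 output.
max=$(awk -F': ' '/Maximum Capacity/ {print $2}' <<'EOF'
Physical Memory Array
	Location: System Board Or Motherboard
	Use: System Memory
	Maximum Capacity: 64 GB
	Number Of Devices: 8
EOF
)
echo "$max"    # 64 GB
```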

However, my good old home server can support a max of 4GB RAM:
# dmidecode -t 16
Sample outputs:

# dmidecode 2.9
SMBIOS 2.6 present.
Handle 0x0008, DMI type 16, 15 bytes
Physical Memory Array
Location: System Board Or Motherboard
Use: System Memory
Error Correction Type: None
Maximum Capacity: 4 GB
Error Information Handle: Not Provided
Number Of Devices: 2

(Fig.02: My home server supports maximum 4 GB RAM and has total 2 DIMM slots)

You can find out currently installed memory information (DIMM and its slots) by typing the following command:
# dmidecode -t 17
Tags: linux

Check RAM

Find Linux RAM Information Command

by nixCraft, February 15, 2012 (last updated February 27, 2012)

How do I find out RAM information under Linux operating systems?

You can use the following commands to find information about RAM under Linux operating systems.
Find Used and Free RAM Info Command

You need to use the free command:
# free
# free -m

total used free shared buffers cached
Mem: 7930 4103 3826 0 59 2060
-/+ buffers/cache: 1983 5946
Swap: 15487 0 15487

(Fig. 01: Display amount of free and used memory in the system)

Find Out RAM Speed, Make, Form Factor, Type and Other Information

You need to use the dmidecode command:
# dmidecode --type 17
# dmidecode --type memory
# dmidecode -t 17

Grep commands

grep Command Syntax

grep 'word' filename
grep 'string1 string2' filename
cat otherfile | grep 'something'
command | grep 'something'
command option1 | grep 'data'
grep --color 'data' fileName

How Do I Use grep To Search File?

Search /etc/passwd for boo user:
$ grep boo /etc/passwd

You can force grep to ignore word case, i.e. match boo, Boo, BOO and all other combinations, with the -i option:
$ grep -i "boo" /etc/passwd
Use grep recursively

You can search recursively, i.e. read all files under each directory looking for a given string:
$ grep -r "string" /etc/
Use grep to search words only

When you search for boo, grep will match fooboo, boo123, etc. You can force grep to select only those lines containing matches that form whole words, i.e. match only the word boo:
$ grep -w "boo" /path/to/file
Use grep to search 2 different words

use egrep as follows:
$ egrep -w 'word1|word2' /path/to/file
Count lines when words have been matched

grep can report the number of times that the pattern has been matched for each file using -c (count) option:
$ grep -c 'word' /path/to/file
Also note that you can use -n option, which causes grep to precede each line of output with the number of the line in the text file from which it was obtained:
$ grep -n 'word' /path/to/file
Grep invert match

You can use the -v option to invert the match; that is, match only those lines that do not contain the given word. For example, print all lines that do not contain the word bar:
$ grep -v bar /path/to/file
UNIX / Linux pipes and grep command

The grep command is often used with pipes. For example, print the names of hard disk devices:
# dmesg | egrep '(s|h)d[a-z]'
Display cpu model name:
# cat /proc/cpuinfo | grep -i 'Model'
However, the above command can also be used as follows, without a shell pipe:
# grep -i 'Model' /proc/cpuinfo
How do I list just the names of matching files?

Use the -l option to list the names of files whose contents mention main():
$ grep -l 'main' *.c
Finally, you can force grep to display output in colors:
$ grep --color vivek /etc/passwd
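A quick worked example of the options above, run against a throwaway file (mktemp is used so nothing real is touched):

```shell
f=$(mktemp)
printf 'boo\nfooboo\nbar\nBoo\n' > "$f"
grep -c 'boo' "$f"      # 2 (boo and fooboo both contain boo)
grep -cw 'boo' "$f"     # 1 (whole-word match only)
grep -ci 'boo' "$f"     # 3 (case-insensitive also matches Boo)
grep -cv 'boo' "$f"     # 2 (lines NOT containing boo: bar and Boo)
rm -f "$f"
```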

Install Linux Source

# tar xvzf Package.tar.gz (or tar xvjf Package.tar.bz2)
# cd Package
# ./configure
# make
# make install

File Permission Numbers

0 --- No Permission
1 --x Execute
2 -w- Write
3 -wx Write+Execute
4 r-- Read
5 r-x Read+Execute
6 rw- Read+Write
7 rwx Read+Write+Execute
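The three octal digits combine per owner/group/other. For example, 640 gives the owner rw-, the group r--, and others nothing. A quick check on a throwaway file (this uses GNU stat's -c flag, as found on Linux; BSD/macOS stat differs):

```shell
f=$(mktemp)
chmod 640 "$f"
stat -c '%a %A' "$f"    # 640 -rw-r-----
rm -f "$f"
```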
Tags: linux

Linux Commands Part 3

Dig: DNS Record Check
NSlookup: DNS Record Check
netstat: Shows active IP Network ports
pwd: show current dir
iostat: show CPU and I/O statistics
uname: show kernel and system info
top: shows processes and memory usage real time

lsof | grep
awk -F: '($3=="0"){print}' /etc/passwd: Shows users with root permissions (UID 0)

route add default gw
lsmod: lsmod - program to show the status of modules in the Linux Kernel

iptables -n -L: shows IP addresses as numbers
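The UID-0 awk filter above can be tried safely on sample passwd-format lines instead of the real /etc/passwd:

```shell
# Print the account name (field 1) for every line whose UID (field 3) is 0.
printf 'root:x:0:0:root:/root:/bin/bash\nalice:x:1000:1000::/home/alice:/bin/sh\n' |
awk -F: '($3=="0"){print $1}'
# -> root
```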
Tags: linux

Find files with Permission on Server

Finding all files and directories with specific permissions.
sudo find . -type d -perm ###
sudo find . -type f -perm ###
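A worked example, with 777 standing in for the ### placeholder above (run in a scratch directory so sudo is not needed):

```shell
d=$(mktemp -d)
touch "$d/open" "$d/locked"
chmod 777 "$d/open"
chmod 600 "$d/locked"
find "$d" -type f -perm 777    # prints only $d/open
rm -rf "$d"
```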
Tags: linux

python -m SimpleHTTPServer

Display a directory as a web page. This command is incredibly handy. Say you want to allow someone access to files quickly via a web browser on your machine. You can do that with the help of Python. All you do is change into the directory you want to serve up and then run the command:
python -m SimpleHTTPServer
Now, whoever needs to view that page simply points their browser to http://ADDRESS_OF_MACHINE:8000, where ADDRESS_OF_MACHINE is either the IP address or domain of the machine (whichever applies). The user will then be able to navigate the files and folders within the directory you are serving up.

Linux Commands Part 2

Command Description
? grep . /proc/sys/net/ipv4/* List the contents of flag files
? set | grep $USER Search current environment
? echo $PATH | tr : '\n' Display the $PATH one per line
? kill -0 $$ && echo process exists and can accept signals Check for the existence of a process (pid)
? find /etc -readable | xargs less -K -p'*ntp' -j $((${LINES:-25}/2)) Search paths and data with full context. Use n to iterate
? namei -l ~/.ssh Output attributes for all directories leading to a file name
Low impact admin
# apt-get install "package" -o Acquire::http::Dl-Limit=42 \
-o Acquire::Queue-mode=access Rate limit apt-get to 42KB/s
echo 'wget url' | at 01:00 Download url at 1AM to current dir
# apache2ctl configtest && apache2ctl graceful Restart apache if config is OK
? nice openssl speed sha1 Run a low priority command (openssl benchmark)
? chrt -i 0 openssl speed sha1 Run a low priority command (more effective than nice)
? renice 19 -p $$; ionice -c3 -p $$ Make shell (script) low priority. Use for non interactive tasks
Interactive monitoring
? watch -t -n1 uptime Clock with system load
? htop -d 5 Better top (scrollable, tree view, lsof/strace integration, ...)
? iotop What's doing I/O
# watch -d -n30 "nice ps_mem.py | tail -n $((${LINES:-12}-2))" What's using RAM
# iftop What's using the network. See also iptraf
# mtr www.pixelbeat.org ping and traceroute combined
Useful utilities
? pv /dev/null Progress Viewer for data copying from files and pipes
? wkhtml2pdf /.../linux_commands.html linux_commands.pdf Make a pdf of a web page
? timeout 1 sleep 3 run a command with bounded time. See also timeout
? python -m SimpleHTTPServer Serve current directory tree at /$HOSTNAME:8000/
? openssl s_client -connect www.google.com:443 </dev/null 2>&1 |
openssl x509 -dates -noout Display the date range for a site's certs
? curl -I www.pixelbeat.org Display the server headers for a web site
# lsof -i tcp:80 What's using port 80
# httpd -S Display a list of apache virtual hosts
? vim scp:/user@remote/path/to/file Edit remote file using local vim. Good for high latency links
? curl -s http://www.pixelbeat.org/pixelbeat.asc | gpg --import Import a gpg key from the web
? tc qdisc add dev lo root handle 1:0 netem delay 20msec Add 20ms latency to loopback device (for testing)
? tc qdisc del dev lo root Remove latency added above
? echo "DISPLAY=$DISPLAY xmessage cooker" | at "NOW +30min" Popup reminder
? notify-send "subject" "message" Display a gnome popup notification
echo "mail -s 'go home' P@draigBrady.com < /dev/null" | at 17:30 Email reminder
uuencode file name | mail -s subject P@draigBrady.com Send a file via email
ansi2html.sh | mail -a "Content-Type: text/html" P@draigBrady.com Send/Generate HTML email
Better default settings (useful in your .bashrc)
# tail -s.1 -f /var/log/messages Display file additions more responsively
? seq 100 | tail -n $((${LINES:-12}-2)) Display as many lines as possible without scrolling
# tcpdump -s0 Capture full network packets
Useful functions/aliases (useful in your .bashrc)
? md () { mkdir -p "$1" && cd "$1"; } Change to a new directory
? strerror() { python -c "import os; print os.strerror($1)"; } Display the meaning of an errno
? plot() { { echo 'plot "-"' "$@"; cat; } | gnuplot -persist; } Plot stdin. (e.g: ? seq 1000 | sed 's/.*/s(&)/' | bc -l | plot)
? hili() { e="$1"; shift; grep --col=always -Eih "$e|$" "$@"; } highlight occurrences of expr. (e.g: ? env | hili $USER)
? alias hd='od -Ax -tx1z -v' Hexdump. (usage e.g.: ? hd /proc/self/cmdline | less)
? alias realpath='readlink -f' Canonicalize path. (usage e.g.: ? realpath ~/../$USER)
? ord() { printf "0x%x\n" "'$1"; } shell version of the ord() function
? chr() { printf $(printf '\\%03o\\n' "$1"); } shell version of the chr() function
? DISPLAY=:0.0 import -window root orig.png Take a (remote) screenshot
? convert -filter catrom -resize '600x>' orig.png 600px_wide.png Shrink to width, computer gen images or screenshots
mplayer -ao pcm -vo null -vc dummy /tmp/Flash* Extract audio from flash video to audiodump.wav
ffmpeg -i filename.avi Display info about multimedia file
? ffmpeg -f x11grab -s xga -r 25 -i :0 -sameq demo.mpg Capture video of an X display
for i in $(seq 9); do ffmpeg -i $i.avi -target pal-dvd $i.mpg; done Convert video to the correct encoding and aspect for DVD
dvdauthor -odvd -t -v "pal,4:3,720xfull" *.mpg;dvdauthor -odvd -T Build DVD file system. Use 16:9 for widescreen input
growisofs -dvd-compat -Z /dev/dvd -dvd-video dvd Burn DVD file system to disc
? python -c "import unicodedata as u; print u.name(unichr(0x2028))" Lookup a unicode character
? uconv -f utf8 -t utf8 -x nfc Normalize combining characters
? printf '\300\200' | iconv -futf8 -tutf8 >/dev/null Validate UTF-8
? printf 'ŨTF8\n' | LANG=C grep --color=always '[^ -~]\+' Highlight non printable ASCII chars in UTF-8
? fc-match -s "sans:lang=zh" List font match order for language and style
? gcc -march=native -E -v - </dev/null 2>&1|sed -n 's/.*-mar/-mar/p' Show autodetected gcc tuning params. See also gcccpuopt
? for i in $(seq 4); do { [ $i = 1 ] && wget http://url.ie/6lko -qO-||
./a.out; } | tee /dev/tty | gcc -xc - 2>/dev/null; done Compile and execute C code from stdin
? cpp -dM /dev/null Show all predefined macros
? echo "#include <features.h>" | cpp -dN | grep "#define __USE_" Show all glibc feature macros
gdb -tui Debug showing source code context in separate windows
? udevadm info -a -p $(udevadm info -q path -n /dev/input/mouse0) List udev attributes of a device, for matching rules etc.
? udevadm test /sys/class/input/mouse0 See how udev rules are applied for a device
# udevadm control --reload-rules Reload udev rules after modification
Extended Attributes (Note you may need to (re)mount with "acl" or "user_xattr" options)
? getfacl . Show ACLs for file
? setfacl -m u:nobody:r . Allow a specific user to read file
? setfacl -x u:nobody . Delete a specific user's rights to file
setfacl --default -m group:users:rw- dir/ Set umask for a specific dir
getcap file Show capabilities for a program
setcap cap_net_raw+ep your_gtk_prog Allow gtk program raw access to network
? stat -c%C . Show SELinux context for file
chcon ... file Set SELinux context for file (see also restorecon)
? getfattr -m- -d . Show all extended attributes (includes selinux,acls,...)
? setfattr -n "user.foo" -v "bar" . Set arbitrary user attributes
BASH specific
? echo 123 | tee >(tr 1 a) | tr 1 b Split data to 2 commands (using process substitution)
meld local_file
Multicore
? taskset -c 0 nproc Restrict a command to certain processors
? find -type f -print0 | xargs -r0 -P$(nproc) -n10 md5sum Process files in parallel over available processors
sort -m <(sort data1) <(sort data2) > data.sorted Sort separate data files over 2 processors
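The md() one-liner from the aliases section above is easy to exercise end to end, using a throwaway base directory:

```shell
# md: make a (possibly nested) directory and change into it in one step.
md () { mkdir -p "$1" && cd "$1"; }

base=$(mktemp -d)
cd "$base"
md sub/dir
basename "$(pwd)"    # dir
```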
Tags: linux

Linux Commands Part 1

Command Description
? apropos whatis Show commands pertinent to string. See also threadsafe
? man -t ascii | ps2pdf - > ascii.pdf make a pdf of a manual page
which command Show full path name of command
time command See how long a command takes
? time cat Start stopwatch. Ctrl-d to stop. See also sw
dir navigation
? cd - Go to previous directory
? cd Go to $HOME directory
(cd dir && command) Go to dir, execute command and return to current dir
? pushd . Put current dir on stack so you can popd back to it
file searching
? alias l='ls -l --color=auto' quick dir listing
? ls -lrt List files by date. See also newest and find_mm_yyyy
? ls /usr/bin | pr -T9 -W$COLUMNS Print in 9 columns to width of terminal
find -name '*.[ch]' | xargs grep -E 'expr' Search 'expr' in this dir and below. See also findrepo
find -type f -print0 | xargs -r0 grep -F 'example' Search all regular files for 'example' in this dir and below
find -maxdepth 1 -type f | xargs grep -F 'example' Search all regular files for 'example' in this dir
find -maxdepth 1 -type d | while read dir; do echo $dir; echo cmd2; done Process each item with multiple commands (in while loop)
? find -type f ! -perm -444 Find files not readable by all (useful for web site)
? find -type d ! -perm -111 Find dirs not accessible by all (useful for web site)
? locate -r 'file[^/]*\.txt' Search cached index for names. This re is like glob *file*.txt
? look reference Quickly search (sorted) dictionary for prefix
? grep --color reference /usr/share/dict/words Highlight occurances of regular expression in dictionary
archives and compression
gpg -c file Encrypt file
gpg file.gpg Decrypt file
tar -c dir/ | bzip2 > dir.tar.bz2 Make compressed archive of dir/
bzip2 -dc dir.tar.bz2 | tar -x Extract archive (use gzip instead of bzip2 for tar.gz files)
tar -c dir/ | gzip | gpg -c | ssh user@remote 'dd of=dir.tar.gz.gpg' Make encrypted archive of dir/ on remote machine
find dir/ -name '*.txt' | tar -c --files-from=- | bzip2 > dir_txt.tar.bz2 Make archive of subset of dir/ and below
find dir/ -name '*.txt' | xargs cp -a --target-directory=dir_txt/ --parents Make copy of subset of dir/ and below
( tar -c /dir/to/copy ) | ( cd /where/to/ && tar -x -p ) Copy (with permissions) copy/ dir to /where/to/ dir
( cd /dir/to/copy && tar -c . ) | ( cd /where/to/ && tar -x -p ) Copy (with permissions) contents of copy/ dir to /where/to/
( tar -c /dir/to/copy ) | ssh -C user@remote 'cd /where/to/ && tar -x -p' Copy (with permissions) copy/ dir to remote:/where/to/ dir
dd bs=1M if=/dev/sda | gzip | ssh user@remote 'dd of=sda.gz' Backup harddisk to remote machine
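A worked example of the tar-pipe copy rows above, replicating a directory tree (with permissions) into another location via two tars, run against scratch directories:

```shell
src=$(mktemp -d); dst=$(mktemp -d)
mkdir "$src/copy"
echo hi > "$src/copy/file"
chmod 640 "$src/copy/file"
# Pack in one subshell, unpack (preserving permissions) in the other.
( cd "$src" && tar -c copy ) | ( cd "$dst" && tar -x -p )
cat "$dst/copy/file"    # hi
rm -rf "$src" "$dst"
```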
rsync (Network efficient file copier: Use the --dry-run option for testing)
rsync -P rsync://rsync.server.com/path/to/file file Only get diffs. Do multiple times for troublesome downloads
rsync --bwlimit=1000 fromfile tofile Locally copy with rate limit. It's like nice for I/O
rsync -az -e ssh --delete ~/public_html/ remote.com:'~/public_html' Mirror web site (using compression and encryption)
rsync -auz -e ssh remote:/dir/ . && rsync -auz -e ssh . remote:/dir/ Synchronize current directory with remote one
ssh (Secure SHell)
ssh $USER@$HOST command Run command on $HOST as $USER (default command=shell)
? ssh -f -Y $USER@$HOSTNAME xeyes Run GUI command on $HOSTNAME as $USER
scp -p -r $USER@$HOST: file dir/ Copy with permissions to $USER's home directory on $HOST
scp -c arcfour $USER@$LANHOST: bigfile Use faster crypto for local LAN. This might saturate GigE
ssh -g -L 8080:localhost:80 root@$HOST Forward connections to $HOSTNAME:8080 out to $HOST:80
ssh -R 1434:imap:143 root@$HOST Forward connections from $HOST:1434 in to imap:143
ssh-copy-id $USER@$HOST Install public key for $USER@$HOST for password-less log in
wget (multi purpose download tool)
? (cd dir/ && wget -nd -pHEKk http://www.pixelbeat.org/cmdline.html) Store local browsable version of a page to the current dir
wget -c http://www.example.com/large.file Continue downloading a partially downloaded file
wget -r -nd -np -l1 -A '*.jpg' http://www.example.com/dir/ Download a set of files to the current directory
wget ftp://remote/file[1-9].iso/ FTP supports globbing directly
? wget -q -O- http://www.pixelbeat.org/timeline.html | grep 'a href' | head Process output directly
echo 'wget url' | at 01:00 Download url at 1AM to current dir
wget --limit-rate=20k url Do a low priority download (limit to 20KB/s in this case)
wget -nv --spider --force-html -i bookmarks.html Check links in a file
wget --mirror http://www.example.com/ Efficiently update a local copy of a site (handy from cron)
networking (Note ifconfig, route, mii-tool, nslookup commands are obsolete)
ethtool eth0 Show status of ethernet interface eth0
ethtool --change eth0 autoneg off speed 100 duplex full Manually set ethernet interface speed
iwconfig eth1 Show status of wireless interface eth1
iwconfig eth1 rate 1Mb/s fixed Manually set wireless interface speed
? iwlist scan List wireless networks in range
? ip link show List network interfaces
ip link set dev eth0 name wan Rename interface eth0 to wan
ip link set dev eth0 up Bring interface eth0 up (or down)
? ip addr show List addresses for interfaces
ip addr add <ip>/<mask> brd + dev eth0 Add (or del) ip and mask
? ip route show List routing table
ip route add default via <gateway-ip> Set default gateway
? host pixelbeat.org Lookup DNS ip address for name or vice versa
? hostname -i Lookup local ip address (equivalent to host `hostname`)
? whois pixelbeat.org Lookup whois info for hostname or ip address
? netstat -tupl List internet services on a system
? netstat -tup List active connections to/from system
windows networking (Note samba is the package that provides all this windows specific networking support)
? smbtree Find windows machines. See also findsmb
nmblookup -A <ip> Find the windows (netbios) name associated with ip address
smbclient -L windows_box List shares on windows machine or samba server
mount -t smbfs -o fmask=666,guest //windows_box/share /mnt/share Mount a windows share
echo 'message' | smbclient -M windows_box Send popup to windows machine (off by default in XP sp2)
text manipulation (Note sed uses stdin and stdout. Newer versions support inplace editing with the -i option)
sed 's/string1/string2/g' Replace string1 with string2
sed 's/\(.*\)1/\12/g' Modify anystring1 to anystring2
sed '/ *#/d; /^ *$/d' Remove comments and blank lines
sed ':a; /\\$/N; s/\\\n//; ta' Concatenate lines with trailing \
sed 's/[ \t]*$//' Remove trailing spaces from lines
sed 's/\([`"$\]\)/\\\1/g' Escape shell metacharacters active within double quotes
? seq 10 | sed "s/^/ /; s/ *\(.\{7,\}\)/\1/" Right align numbers
sed -n '1000{p;q}' Print 1000th line
sed -n '10,20p;20q' Print lines 10 to 20
sed -n 's/.*<title>\(.*\)<\/title>.*/\1/ip;T;q' Extract title from HTML web page
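Two of the sed one-liners above (strip comments/blank lines, strip trailing whitespace) can be checked quickly on sample input piped from printf; here they are combined into a single sed program:

```shell
printf 'keep   \n# a comment\n\nalso keep\n' |
sed '/^ *#/d; /^ *$/d; s/[ \t]*$//'
# -> keep
# -> also keep
```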

NATO Phonetic Alphabet

A Alpha N November
B Bravo O Oscar
C Charlie P Papa
D Delta Q Quebec
E Echo R Romeo
F Foxtrot S Sierra
G Golf T Tango
H Hotel U Uniform
I India V Victor
J Juliet W Whiskey
K Kilo X X-ray
L Lima Y Yankee
M Mike Z Zulu

Benchmarking Apache Servers

What is ab:

ab is a tool that allows you to stress test your web server.
How is ab installed:

ab is installed as part of the Apache server install process.
How do you use ab:

This is the basic template of the command: ab -n [number] -c [number] [http[s]://]hostname[:port]/path. There are other switches that can be enabled; please see Config Options for ab.
General Information:

An important part of managing a web server is benchmarking the server to check the total number of connections it is able to support concurrently.
ab allows you to stress test the server to see how many requests per second your Apache installation is capable of serving. A basic example of the command is
ab -n 200 -c 100 http://example.com/. -n is the number of requests, -c is the number of concurrent connections to the server, and http://example.com/ is the address of the
website you would like to test. You can also enter a port number to test either the default http port 80 or the
default https port 443.

Example of ab Output:

Here is an example of the output from ab:

ab -n 200 -c 100 http://example.com/
This is ApacheBench, Version 2.3
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking example.com (be patient)
Completed 100 requests
Completed 200 requests
Finished 200 requests

Server Software: BigIP
Server Hostname: example.com
Server Port: 80

Document Path: /
Document Length: 0 bytes

Concurrency Level: 100
Time taken for tests: 0.326 seconds
Complete requests: 200
Failed requests: 0
Write errors: 0
Non-2xx responses: 202
Total transferred: 24846 bytes
HTML transferred: 0 bytes
Requests per second: 613.56 [#/sec] (mean)
Time per request: 162.982 [ms] (mean)
Time per request: 1.630 [ms] (mean, across all concurrent requests)
Transfer rate: 74.44 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 37 61 37.5 45 150
Processing: 36 69 38.1 46 140
Waiting: 36 68 38.1 46 140
Total: 74 129 45.9 94 200

Percentage of the requests served within a certain time (ms)
50% 94
66% 164
75% 175
80% 177
90% 190
95% 193
98% 197
99% 197
100% 200 (longest request)

Important lines to check in the output are:

"Complete requests:" This is the total number of completed requests
"Failed requests:" This is the total number of failed requests
"Requests per second:" This is the total number of requests per second
"Time per request:" This shows the time per request
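When comparing runs, those headline numbers can be pulled out of a saved ab report with awk. A sketch, fed with sample lines from the output above via a here-doc; in practice you would pipe in `ab ... 2>/dev/null` or a saved report file instead:

```shell
# Split each line on "colon plus spaces" and print name=value pairs
# for the three summary lines worth tracking.
awk -F': *' '/^(Complete requests|Failed requests|Requests per second)/ {print $1 "=" $2}' <<'EOF'
Complete requests: 200
Failed requests: 0
Requests per second: 613.56 [#/sec] (mean)
EOF
```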

Config Options:

ab Configuration Options

AB(8) ab AB(8)

ab - Apache HTTP server benchmarking tool

ab [ -A auth-username:password ] [ -b windowsize ] [ -c concurrency ] [ -C cookie-name=value ] [ -d ] [ -e csv-file ] [ -f protocol ] [ -g gnuplot-file ] [ -h ] [ -H custom-header ] [ -i ] [ -k ] [ -n requests ] [ -p POST-file ] [ -P proxy-auth-username:password ] [ -q ] [ -r ] [ -s ] [ -S ] [ -t timelimit ] [ -T content-type ] [ -u PUT-file ] [ -v verbosity] [ -V ] [ -w ] [ -x
-attributes ] [ -X proxy[:port] ] [ -y
-attributes ] [ -z
-attributes ] [ -Z ciphersuite ] [http[s]://]hostname[:port]/path

ab is a tool for benchmarking your Apache Hypertext Transfer Protocol (HTTP) server. It is designed to give you an impression of how your current Apache installation performs. This especially shows you how many requests per second your Apache installation is capable of serving.

-A auth-username:password
Supply BASIC Authentication credentials to the server. The username and password are separated by a single : and sent on the wire base64 encoded. The string is sent regardless of whether the server needs it (i.e., has sent a 401 authentication needed).

-b windowsize
Size of TCP send/receive buffer, in bytes.

-c concurrency
Number of multiple requests to perform at a time. Default is one request at a time.

-C cookie-name=value
Add a Cookie: line to the request. The argument is typically in the form of a name=value pair. This field is repeatable.

-d Do not display the "percentage served within XX [ms] table". (legacy support).

-e csv-file
Write a Comma separated value (CSV) file which contains for each percentage (from 1% to 100%) the time (in milliseconds) it took to serve that percentage of the requests. This is usually more useful than the 'gnuplot' file, as the results are already 'binned'.

-f protocol
Specify SSL/TLS protocol (SSL2, SSL3, TLS1, or ALL).

-g gnuplot-file
Write all measured values out as a 'gnuplot' or TSV (Tab separated values) file. This file can easily be imported into packages like Gnuplot, IDL, Mathematica, Igor or even Excel. The labels are on the first line of the file.

-h Display usage information.

-H custom-header
Append extra headers to the request. The argument is typically in the form of a valid header line, containing a colon-separated field-value pair (i.e., "Accept-Encoding: zip/zop;8bit").

-i Do HEAD requests instead of GET.

-k Enable the HTTP KeepAlive feature, i.e., perform multiple requests within one HTTP session. Default is no KeepAlive.

-n requests
Number of requests to perform for the benchmarking session. The default is to just perform a single request which usually leads to non-representative benchmarking results.

-p POST-file
File containing data to POST. Remember to also set -T.

-P proxy-auth-username:password
Supply BASIC Authentication credentials to a proxy en-route. The username and password are separated by a single : and sent on the wire base64 encoded. The string is sent regardless of whether the proxy needs it (i.e., has sent a 407 proxy authentication needed).

-q When processing more than 150 requests, ab outputs a progress count on stderr every 10% or 100 requests or so. The -q flag will suppress these messages.

-r Don't exit on socket receive errors.

Test apache

ab -n 100 -c 10

Python Reg Expressions


Python Code

Python Basics
About Python: Python is a high level scripting language with object oriented features.

Python programs can be written using any text editor and should have the extension .py. Python programs do not have a required first or last line, but can be given the location of python as their first line: #!/usr/bin/python and become executable. Otherwise, python programs can be run from a command prompt by typing python file.py. There are no braces {} or semicolons ; in python. It is a very high level language. Instead of braces, blocks are identified by having the same indentation.
if (x > y):
    print("x is greater than y")
    x = x - 1
else:
    print("x is less than or equal to y")

Comments are supported in the same style as Perl:
print("This is a test") #This is a comment.
#This is also a comment. There are no multi-line comments.

Variables and Datatypes
Variables in Python follow the standard nomenclature of an alphanumeric name beginning in a letter or underscore. Variable names are case sensitive. Variables do not need to be declared and their datatypes are inferred from the assignment statement.
Python supports the following data types:

bool = True
name = "Craig"
age = 26
pi = 3.14159
print(name + ' is ' + str(age) + ' years old.')
-> Craig is 26 years old.
Variable Scope: Most variables in Python are local in scope to their own function or class. For instance if you define a = 1 within a function, then a will be available within that entire function but will be undefined in the main program that calls the function. Variables defined within the main program are accessible to the main program but not within functions called by the main program.
Global Variables: Global variables, however, can be declared with the global keyword.
a = 1
b = 2
def Sum():
    global a, b
    b = a + b
Sum()
print(b)
-> 3

Statements and Expressions
Some basic Python statements include:
print: Output strings, integers, or any other datatype.
The assignment statement: Assigns a value to a variable.
input: Allow the user to input numbers or booleans. WARNING: input accepts your input as a command and thus can be unsafe.
raw_input: Allow the user to input strings. If you want a number, you can use the int or float functions to convert from a string.
import: Import a module into Python. Can be used as import math and all functions in math can then be called by math.sin(1.57) or alternatively from math import sin and then the sine function can be called with sin(1.57).

print "Hello World"
print('Print works with or without parenthesis')
print("and single or double quotes")
print("Newlines can be escaped like\nthis.")
print("This text will be printed"),
print("on one line because of the comma.")
name = raw_input("Enter your name: ")
a = int(raw_input("Enter a number: "))
print(name + "'s number is " + str(a))
a = b = 5
a = a + 4
print a,b
9 5
Python expressions can include:
a = b = 5 #The assignment statement
b += 1 #post-increment
c = "test"
import os,math #Import the os and math modules
from math import * #Imports all functions from the math module

Operators and Maths
Arithmetic: +, -, *, /, and % (modulus)
Comparison: ==, !=, <, >, <=, >=
Logical: and, or, not
Exponentiation: **
Execution: os.system('ls -l')
#Requires import os
Maths: Requires import math
Absolute Value: a = abs(-7.5)
Arc sine: x = asin(0.5) #returns in rads
Ceil (round up): print(ceil(4.2))
Cosine: a = cos(x) #x in rads
Degrees: a = degrees(asin(0.5)) #a=30
Exp: y = exp(x) #y=e^x
Floor (round down): a = floor(a+0.5)
Log: x = log(y); #Natural Log
x = log(y,5); #Base-5 log
Log Base 10: x = log10(y)
Max: mx = max(1, 7, 3, 4) #7
mx = max(arr) #max value in array
Min: mn = min(3, 0, -1, x) #min value
Powers: x = pow(y,3) #x=y^3
Radians: a = cos(radians(60)) #a=0.5
Random #: Random number functions require import random
random.seed() #Set the seed based on the system time.
x = random() #Random number in the range [0.0, 1.0)
y = randint(a,b) #Random integer in the range [a, b]
Round: print round(3.793,1) #3.8 - rounded to 1 decimal
a = round(3.793,0) #a=4.0
Sine: a = sin(1.57) #in rads
Square Root: x = sqrt(10) #3.16...
Tangent: print tan(3.14) #in rads

Strings can be specified using single quotes or double quotes. Strings do not expand escape sequences unless it is defined as a raw string by placing an r before the first quote: print 'I\'ll be back.'.
print r'The newline \n will not expand'
a = "Gators"
print "The value of a is \t" + a
-> The value of a is Gators
If a string is not defined as raw, escapes such as \n, \r, \t, \\, and \" may be used.
Optional syntax: Strings that start and end with """ may span multiple lines: print """
This is an example of a string in the heredoc syntax.
This text can span multiple lines."""
String Operators:
Concatenation is done with the + operator.
Converting to numbers is done with the casting operations:
x = 1 + float(10.5) #$x=11.5, float
x = 4 - int("3") #$x=1, int
You can convert to a string with the str casting function:
s = str(3.5)
name = "Lee"
print name + "'s number is " + str(24)

Comparing Strings:
Strings can be compared with the standard operators listed above: ==, !=, <, >, <=, >=.
String Functions:
s = "Go Gators! Come on Gators!"
Extracting substrings: Strings in Python can be subscripted just like an array: s[4] = 'a'. Like in IDL, indices can be specified with slice notation i.e., two indices separated by a colon. This will return a substring containing characters index1 through index2-1. Indices can also be negative, in which case they count from the right, i.e. -1 is the last character. Thus substrings can be extracted like
x = s[3:9] #x = "Gators"
x = s[:2] #x = "Go"
x = s[19:] #x = "Gators!"
x = s[-7:-2] #x = "Gator"
However, strings are immutable so s[2] = 'a' would cause an error.
int count(sub [,start[,end]]): returns the number of occurrences of the substring sub in the string
x = s.count("Gator") #x = 2
boolean endswith(sub [,start[,end]]): returns true if the string ends with the specified substring and false otherwise:
x = s.endswith("Gators") #x = False
int find(sub [,start[,end]]): returns the numeric position of the first occurrence of sub in the string. Returns -1 if sub is not found.
x = s.find("Gator") #x = 3
x = s.find("gator") #x = -1
string join(array): combines elements of the string array into a single string and returns it. The separator between elements is the string providing this method.
a = ['abc','def','ghi']
t = "--"
x = t.join(a) #x = abc--def--ghi
int len(string): returns the length of the string
x = len(s) #x = 26
string lower(): returns a version of the string with all lower case letters.
print s.lower() #go gators! come on gators!
string replace(old, new [,count]): returns a copy of the string with all occurrences of old replaced by new. If the optional count argument is given, only the first count occurrences are replaced.
x = s.replace("Gators","Tigers",1) #x = Go Tigers! Come on Gators!
int rfind(sub [,start[,end]]): same as find but returns the numeric position of the last occurrence of sub in the string.
x = s.rfind("Gator") #x = 19
array split([sep [,maxsplit]]): splits a single string into a string array using the separator defined. If no separator is defined, whitespace is used. Consecutive whitespace delimiters are then treated as one delimiter. Optionally you can specify the maximum number of splits so that the max size of the array would be maxsplit+1.
a = s.split() #a=['Go', 'Gators!', 'Come', 'on', 'Gators!']
boolean startswith(sub [,start[,end]]): returns true if the string starts with the specified substring and false otherwise:
x = s.startswith("Go") #x = True
string strip([chars]): returns a copy of the string with leading and trailing characters removed. If chars (a string) is not specified, leading and trailing whitespace is removed.
string upper(): returns a version of a string with all upper case letters.
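The strip, upper, and lower methods above can be sketched as follows (illustrative strings, Python 3 syntax rather than the Python 2 prints used elsewhere in these notes):

```python
padded = "  Go Gators!  "
trimmed = padded.strip()            # whitespace removed from both ends
custom = "xxGatorsxx".strip("x")    # strip the given characters instead of whitespace
shouted = "go gators!".upper()      # all upper case
```

Note that strip with an argument removes any of the listed characters from both ends, not a prefix/suffix string.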

Arrays in basic Python are actually lists that can contain mixed datatypes. However, the numarray module contains support for true arrays, including multi-dimensional arrays, as well as IDL-style array operations and the where function. To use arrays, you must import numarray or from numarray import *. Unfortunately, numarray generally only supports numeric arrays. Lists must be used for strings or objects. By importing numarray.strings and numarray.objects, you can convert string and object lists to arrays and use some of the numarray features, but only numeric lists are fully supported by numarray.
Creating lists: A list can be created by defining it with []. A numbered list can also be created with the range function which takes start and stop values and an increment.
list = [2, 4, 7, 9]
list2 = [3, "test", True, 7.4]
a = range(5) #a = [0,1,2,3,4]
a = range(10,0,-2) #a = [10,8,6,4,2]
An empty list can be initialized with [] and then the append command can be used to append data to the end of the list:
a = []
a.append("test")
a.append(5)
print a
-> ['test', 5]
Finally, if you want a list to have a predetermined size, you can create a list and fill it with None's:
a = [None]*10
a[5] = "Fifth"
a[3] = 6
print len(a)
-> 10
print a
-> [None, None, None, 6, None, 'Fifth', None, None, None, None]
Removing from lists: The pop method can be used to remove any item from the list:
a.pop(5)
print a
-> [None, None, None, 6, None, None, None, None, None]
Creating arrays: An array can be defined by one of four procedures: zeros, ones, arange, or array. zeros creates an array of a specified size containing all zeros:
a = zeros(5) #a=[0 0 0 0 0]
ones similarly creates an array of a certain size containing all ones:
a = ones(5) #a=[1 1 1 1 1]
arange works exactly the same as range, but produces an array instead of a list:
a = arange(10,0,-2) #a = [10 8 6 4 2]
And finally, array can be used to convert a list to an array. For instance, when reading from a file, you can create an empty list and take advantage of the append command and lists not having a fixed size. Then once the data is all in the list, you can convert it to an array:
a = [1, 3, 9] #create a list and append to it
a.append(3)
a.append(5)
print a
-> [1, 3, 9, 3, 5]
a = array(a)
print a
-> [1 3 9 3 5]
Multi-dimensional lists: Because Python arrays are actually lists, you are allowed to have jagged arrays. Multi-dimensional lists are just lists of lists:
a = [[0, 1, 2], [3, 4, 5]]
print a[1]
-> [3, 4, 5]
s = ["Lee", "Walsh", "Roberson"]
s2 = ["Williams", "Redick", "Ewing", "Dockery"]
s3 = [s, s2]
print s3[1][2]
-> Ewing
Multi-dimensional arrays: However, numarray does support true multi-dimensional arrays. These can be created through one of five methods: zeros, ones, array, arange, and reshape. zeros and ones work the same way as single dimensions except that they take a tuple of dimensions (a comma separated list enclosed in parentheses) instead of a single argument:
a = zeros((3,5))
a[1,2] = 8
print a
-> [[0 0 0 0 0]
[0 0 8 0 0]
[0 0 0 0 0]]
b = ones((2,3,4)) #create a 2x3x4 array containing all ones.

array works the same way as for 1-d arrays: you can create a list and then convert it to an array. Note with multi-dimensional arrays though, trying to use array to convert a jagged list into an array will cause an error. Lists must be rectangular to be able to be converted to arrays.
s = ["Lee", "Walsh", "Roberson", "Brewer"]
s2 = ["Williams", "Redick", "Ewing", "Dockery"]
s3 = [s, s2]
s4 = array(s3)
print s4 + "test"
-> [['Leetest', 'Walshtest', 'Robersontest', 'Brewertest'],
['Williamstest', 'Redicktest', 'Ewingtest', 'Dockerytest']]
print s4[:,1:3]
-> [['Walsh', 'Roberson'],
['Redick', 'Ewing']]
arange also works the same as with 1-d arrays except you need to pass the shape parameter:
a = arange(25, shape=(5,5))
And finally, reshape can be used to convert a 1-d array into a multi-dimensional array. To create a 5x5 array with the elements numbered from 0 to 24, you could use:
b = arange(25)
b = reshape(b,(5,5))
Array Dimensions and Subscripts: When creating a multi-dimensional array, the format is ([[depth,] height,] width). Therefore, when accessing array elements in a two dimensional array, the row is listed first, then the column. When accessing an element of a two-dimensional list, the following notation must be used: list[i][j]. However, two dimensional arrays can also use the notation: array[i,j]. In fact, this is the preferred notation of the two for arrays because you cannot use wildcards in the first dimension of the array[i][j] notation (i.e., array[1:3][4] would cause an error whereas array[1:3,4] is valid).

Wildcards can be used in array subscripts using the :, which is known as slicing. This is similar to IDL, with one major difference: if a = [0 1 2 3 4 5], in IDL a[1:4] = [1 2 3 4], but in Python, a[1:4] = [1 2 3]. In Python, slicing array[i:j] returns an array containing elements i through j-1. Just like with strings, indices of arrays can be negative, in which case they count from the right instead of the left, i.e. a[-4:-1] = [2 3 4]. A : alone can also specify all elements up to or from a given index, or all elements, and arrays or lists can be used to subscript other arrays:
print a[:3] #[0 1 2]
print a[4:] #[4 5]
print a[:] #[0 1 2 3 4 5]
print a[[1,3,4]] #[1 3 4]
Note that slicing in python does not create a new array but just a pointer to the original array. b=a[0:10] followed by b[0] = 5 also changes a[0] to 5. To avoid this, use b = copy(a[0:10])
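The view-versus-copy behavior described above can be sketched with modern numpy (standing in for numarray, whose slicing semantics are the same on this point); the variable names are illustrative:

```python
import numpy as np  # numpy used here in place of the old numarray module

a = np.arange(6)
b = a[0:3]         # a view: b shares storage with a
b[0] = 99          # ...so this also changes a[0]
c = a[0:3].copy()  # an independent copy
c[1] = -7          # a is unaffected by changes to c
```

After this runs, a is [99 1 2 3 4 5]: the write through the view b changed a, while the write to the copy c did not.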
Array Operators:
Lists: a + b
For Lists, the + operator appends the list on the right (b) to the list on the left.
a = ["Roberson", "Walsh"]
b = ["Lee", "Humphrey"]
-> a+b = ["Roberson", "Walsh", "Lee", "Humphrey"]
Arrays: concatenate((a,b)[,axis])
For arrays, use the numarray function concatenate. It also allows you to specify the axis when concatenating multi-dimensional arrays.
b = arange(5)
print concatenate((b, arange(6)))
-> [0 1 2 3 4 0 1 2 3 4 5]
print concatenate((b,a),axis=1)
-> [[0 0 0 0]
[1 0 0 0]
[2 0 8 0]
[3 0 0 0]
[4 0 0 0]]
Equality: a == b and Inequality: a != b
For lists, these work the same as for scalars, meaning they can be used in if statements. For arrays, they return an array containing true or false for each array element.
Array Functions: All functions but len are for arrays only
len: returns the length of a list/array.
s = ["Lee", "Walsh", "Roberson", "Brewer"]
print len(s) #4
argmax([axis]): returns the index of the largest element in a 1D array, or an array of the indices of the largest elements along the specified axis for a multi-dimensional array.
a = array([[1,6,9], [2,4,0], [7,4,8]])
print a.argmax(1)
-> [2 1 2]
argmin([axis]): returns the index of the smallest element in a 1D array, or an array of the indices of the smallest elements along the specified axis for a multi-dimensional array.
b = array([2,4,7,1,3,-1,5])
print b.argmin()
-> 5
argsort([axis]): returns an array of indices that allow access to the elements of the array in ascending order.
print b.argsort()
-> [5 3 0 4 1 6 2]
print b[b.argsort()]
-> [-1 1 2 3 4 5 7]
print a.argsort(1)
-> [[0 1 2]
[2 0 1]
[1 0 2]]
astype(type): returns a copy of the array converted to the specified type.
a = a.astype('Float64')
b = b.astype('Int32')
copy(): returns a copy of the array.
c = a[:,2].copy()
print c
-> [9 0 8]
diagonal(): for multi-dimensional arrays, returns the diagonal elements of the array, where the row and column indices are equal.
print a.diagonal()
-> [1 4 8]
info(): prints information about the array which may be useful for debugging.
max(): returns the largest element in the array
print a.max()
-> 9
mean(): returns the average of all elements in an array
print a.mean()
-> 4.55555555556
min(): returns the smallest element in the array
print b.min()
-> -1
nelements(): returns the total number of elements in the array
print a.nelements()
-> 9
product(array [,axis]): returns the product of an array or an array of the products along an axis of an array.
print product(b)
-> -840
print product(a,1)
-> [ 54 0 224]
reshape(array, shape): function that changes the shape of an array. But the new shape must have the same size as the old shape, otherwise an error will occur.
c = reshape(a, 9)
a = reshape(c,(3,3))
resize(shape): shrinks/grows the array to a new shape. Can be called as a method (replaces old array) or a function. The new shape does not have to be the same size as the old shape. If it is smaller, values will be cut off, and if it is bigger, values will repeat.
print resize(a, 5) #function form: a itself is unchanged
-> [1 6 9 2 4]
a.resize((2,6)) #method form: replaces a
print a
-> [[1 6 9 2 4 0]
[7 4 8 1 6 9]]
c = resize(a,(2,2))
print c
-> [[1 6]
[9 2]]
shape(array): returns the dimensions of the array in a tuple
print shape(a), shape(b), shape(a)[0]*shape(a)[1]
-> (3,3) (7,) 9
sort(array [,axis]): returns an array containing a copy of the data in the array and the elements sorted in increasing order. In the case of a multi-dimensional array, the data will be sorted along one axis and not across the whole array.
print sort(b)
-> [-1 1 2 3 4 5 7]
print sort(a)
-> [[1 6 9]
[0 2 4]
[4 7 8]]
print sort(a,0)
-> [[1 4 0]
[2 4 8]
[7 6 9]]
stddev(): returns the std deviation of all elements in the array
print a.stddev()
-> 3.16666666667
sum(): Can be called as a method or a function. The behavior is identical for 1-d arrays. But for multi-dimensional arrays, calling as a method returns the sum of the entire array, whereas calling it as a function allows you to specify an axis and returns an array with the sums along that axis.
print a.sum()
-> 41
print sum(a)
-> [10 14 17]
print sum(a,1)
-> [16 6 19]
trace(): Returns the sum of the diagonal elements of an array
print a.trace()
-> 13
type(): returns a string containing the type of the array.
print a.type()
-> Int32
tolist(): returns a list containing the same data as the array.
c = a.tolist()
transpose(): Can be called as a method (replaces old array) or a function. Returns the transpose of the array.
b = transpose(a)
where(expr, 1, 0): Similar to the IDL where function. Returns an array of the same size and dimensions containing 1 if the condition is true and 0 if the condition is false. Any value may be substituted for 1 and 0, but they are the recommended values (i.e. true, false) so that compress can be used to extract values from the array: compress(mask_array, data_array).
c = where(b > 2, 1, 0)
print c
-> [0 1 1 0 1 0 1]
print compress(c,b)
-> [4 7 3 5]
c = where(a > 2, 1, 0)
print c
-> [[0 1 1]
[0 1 0]
[1 1 1]]
print compress(c,a)
-> [6 9 4 7 4 8]

if: if expr: statement
if-else: if expr: statement1
else: statement2
if-elif: if expr: statement1
elif expr: statement2
else: statement3

Multiple elifs can be included in the same if statement. There is no switch or case statement, so multiple elifs must be used instead. While parentheses are not required around the expression, they can be used.

if a > b: print "a is greater than b";

if (a > b):
    print "a is greater than b"
    print "blocks are defined by indentation"
elif (a < b):
    print "a is less than b"
else:
    print "a is equal to b"

for: for var in range(start [,stop [,inc]]): statements
Similar to IDL and BASIC, except for the range statement. var can be any variable. The range statement can take start and stop values and an increment.
while: while expr: statements
Executes statements while the expression is true.
continue: continue
Skips the rest of the body of the loop for the current iteration and continues execution at the beginning of the next iteration.
break: break
Ends the execution of the current loop.
else: else
for and while loops can both have else clauses, which are executed after the loop terminates normally by falsifying the conditional, but else clauses are not executed when a loop terminates via a break statement.
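The for/else behavior just described can be sketched as follows (illustrative values, Python 3 syntax):

```python
# find the first odd number; the else clause runs only if the loop
# terminates normally (no break)
result = None
for n in [2, 4, 7, 9]:
    if n % 2 == 1:
        result = n
        break          # loop broken at 7, so else is skipped
else:
    result = "none found"

# a loop that runs to completion does execute its else clause
log = []
for n in [2, 4, 6]:
    log.append(n)
else:
    log.append("done")
```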
foreach: for x in array: statements
Loops over the array given by array. On each iteration, the value of the current element is assigned to x and the internal array pointer is advanced by one.

for j in range(10): print "Value number " + str(j) + " is " + value[j]

x = 0
for j in range(10,0,-2):
    x = x + j
print x

while (b < a):
    print "b is less than a."
    b = b + 1

k = 0
for j in range(0,10):
    while (k < j):
        print "j = " + str(j) + " k = " + str(k)
        k = k + 1
    if (j == 1): break
    print "j equals k or j equals 1"

a = ["abc","def","ghi"]
for x in a:
    print x

Definition: Functions in Python are defined with the following syntax:
def funct(arg_1, arg_2, ..., arg_n):
    print "This is a function."
    return value
Any Python code, including other function and class definitions, may appear inside a function. Functions may also be defined within a conditional, but in that case the function's definition must be processed prior to its being called. Python does not support function overloading but does support variable number of arguments, default arguments, and keyword arguments. Return types are not specified by functions.
Arguments: Rebinding an argument inside a function does not change the variable outside of the function (though mutating a mutable object, such as a list, does affect the caller). If you want the function to modify non-local variables, you must declare them as global in the first line of the function. Note that if you declare any variables as global, that name cannot be reused in the argument list, i.e. this would cause an error:
def double(x):
    global x #error: x is both global and an argument
    x = x*2
Instead this could be done:
def double(n):
    n = n * 2
    return n
x = double(x)
or:
def doubleX():
    global x
    x = x * 2
Default Arguments: A function may define default values for arguments. The default must be a constant expression or array and any defaults should be on the right side of any non-default arguments.
def square(x = 5):
    return x*x
If this function is called with square(), it will return 25. Otherwise, if it is called with square(n) , it will return n^2.
Variable length argument lists: Variable length arguments are supported by being wrapped up in a tuple. Before the variable number of arguments, zero or more normal arguments may occur:
def var_args(arg1, arg2, *args):
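The original shows only the signature; the body below is a hypothetical illustration of the tuple wrapping (Python 3 syntax):

```python
def var_args(arg1, arg2, *args):
    # args arrives as a tuple holding any extra positional arguments
    return arg1 + arg2 + sum(args)

small = var_args(1, 2)         # args == ()
large = var_args(1, 2, 3, 4)   # args == (3, 4)
```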
Keyword arguments: Functions can also be called using arguments of the form keyword = value:
def player(name, number, team="Florida"):
    print(name + " wears number " + str(number) + " for " + team)
player("Matt Walsh", 44)
player(number = 44, name = "David Lee")
player("Anthony Roberson", number = 1)
player(name = "J.J. Redick", number = 4, team = "Duke")
Return: Values are returned from the function with the return command: return var. A function returns a single object, but multiple values can be returned by packing them into a tuple, list, or object. return immediately ends execution of the function and passes control back to the line from which it was called.
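A common pattern is to pack several results into one tuple and unpack them at the call site; a minimal sketch with a hypothetical min_max helper (Python 3 syntax):

```python
def min_max(values):
    # return two values at once, packed into a tuple
    return min(values), max(values)

lo, hi = min_max([3, 1, 4, 1, 5])  # tuple unpacked into two names
```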
Variable Functions: Python supports the concept of variable functions, meaning a variable can point to a function instead of a value. Methods of objects can be referenced similarly.
def test():
    print 'This is a test.'
var = test
var() #this calls test()
var = circle.setRadius
var(3) #this calls circle.setRadius(3)

Classes and OOP
Python supports OOP and classes to an extent, but is not a full OOP language. A class is a collection of variables and functions working with these variables. Classes are defined somewhat similarly to Java, but differences include self being used in place of this and constructors being named __init__ instead of classname. Also note that self must be used every time a class-wide variable is referenced and must be the first argument in each function's argument list, including the constructor. In addition, functions and constructors cannot be overloaded, but as discussed above, do support default arguments instead. Like functions, a class must be defined before it can be instantiated. In Python, all class members are public.
Initializing vars: Only constant initializers for class variables are allowed (n = 1). To initialize variables with non-constant values, you must use the constructor. You cannot declare uninitialized variables.
Encapsulation: Python does not really support encapsulation because it does not support data hiding through private and protected members. However some pseudo-encapsulation can be done. If an identifier begins with a double underscore, i.e. __a, then it can be referred to within the class itself as self.__a, but outside of the class, it is named instance._classname__a. Therefore, while it can prevent accidents, this pseudo-encapsulation cannot really protect data from hostile code.
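The name-mangling behavior can be sketched as follows (the class name Secretive is a hypothetical example):

```python
class Secretive:
    def __init__(self):
        self.__a = 42          # usable as self.__a inside the class

    def reveal(self):
        return self.__a

obj = Secretive()
inside = obj.reveal()          # normal access via the class's own method
outside = obj._Secretive__a    # the mangled instance._classname__a form
```

Both reads return the same value: the mangling renames the attribute, it does not hide it.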
Inheritance: Python allows classes to be extended (see the Extended Class example below) by adding the base class name in parentheses after the derived class name: class Derived(Base):. The child class takes all the variables and functions from the parent class and can extend that class by adding additional variables and adding or overriding functions. If class B extends class A, then A or B can be used anywhere an A is expected, but only B can be used where a B is expected because it contains additional information/methods not found in A. In addition, Python supports multiple inheritance: class Derived(Base1, Base2, Base3):
Abstract classes: Abstract classes and interfaces are not supported in Python. In Python, there is no difference between an abstract class and a concrete class. Abstract classes create a template for other classes to extend and use. Instances can not be created of abstract classes but they are very useful when working with several objects that share many characteristics. For instance, when creating a database of people, one could define the abstract class "Person", which would contain basic attributes and functions common to all people in the database. Then child classes such as "SinglePerson", "MarriedCouple", or "Athlete" could be created by extending "Person" and adding appropriate variables and functions. The database could then be told to expect every entry to be an object of type "Person" and thus any of the child classes would be a valid entry. In Python, you could create a class Person and extend it with the child classes listed above, but you could not prevent someone from instantiating the Person class.
Parent: The parent keyword is not supported by Python, but you can call methods from the base classes directly: BaseClass.method_name(self, arguments) (see the Extended Class example below).
Constructors: Constructors are functions that are automatically called when you create a new instance of a class. They can be used for initialization purposes. A function is a constructor when it has the name __init__. When extending classes, if a new constructor is not defined, the constructor from the parent class is used (see the example below). When an object of type RectWithPerimeter is created, the constructor from Rectangle is called. If however, I were to add a function in RectWithPerimeter with the name __init__, then that function would be used as its constructor.
Comparing Objects: Objects can be compared using the == and != operators. Two objects are equal only if they are the same instance of the same object. Even if two objects have the same attributes and values and are instances of the same class, they are not equal if they are separate instances.
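The identity-based default comparison can be sketched as follows (Point is a hypothetical class with no custom __eq__):

```python
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(1, 2)
q = Point(1, 2)   # same attribute values, but a separate instance
r = p             # a second name for the same instance

same_instance = (p == r)   # True: identical instance
same_values = (p == q)     # False: equal attributes, different instances
```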

Example Class:
class Rectangle:
    #Optionally define variable width
    width = 0
    #Constructor with default arguments
    def __init__(self, width = 0, height = 0):
        self.width = width
        self.height = height
    def setWidth(self, width):
        self.width = width
    def setHeight(self, height):
        self.height = height
    def getArea(self):
        return self.width * self.height

arect = Rectangle() #create a new Rectangle with dimensions 0x0.
arect.setWidth(4) #set dimensions so the area below is 24
arect.setHeight(6)
print arect.getArea()
-> 24
rect2 = Rectangle(7,3) #new Rectangle with dimensions 7x3.

Extended Class:
class RectWithPerimeter(Rectangle):
    #add new functions
    def getPerimeter(self):
        return 2*self.height + 2*self.width
    def setDims(self, width, height):
        #call base class methods from Rectangle
        Rectangle.setWidth(self, width)
        Rectangle.setHeight(self, height)
arect = RectWithPerimeter(6,5) #Uses the constructor from Rectangle because no new constructor is provided to override it.
print arect.getArea() #Uses the getArea function from Rectangle and prints 30.
print arect.getPerimeter() #Uses getPerimeter from RectWithPerimeter and prints 22.
arect.setDims(4,9) #Use setDims to change the dimensions.

File I/O
Opening Files: file open(string filename, string mode):
open can be used to open files for reading, writing, and appending. It binds a named file object to a stream that can then be used to read/write data. Possible modes include:
'r': Open for reading.
'w': Open for writing. Any existing data will be overwritten.
'a': Open for writing. New data will be appended to existing data.
'b': Use this flag when working with binary files (e.g. 'rb').
Checking Files: Python supports several methods of checking if a file exists and checking its properties:
bool os.access(string path, int mode): returns TRUE if the filename exists and matches the mode query. The mode query can be any of the following constants:
os.F_OK: test the existence of path
os.R_OK: tests if path exists and is readable
os.W_OK: tests if path exists and is writable
os.X_OK: tests if path exists and is executable
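The os.access modes above can be sketched against a real file (tempfile is used so the example is self-contained; the paths are illustrative):

```python
import os
import tempfile

fd, path = tempfile.mkstemp()   # create a real temporary file to test against
os.close(fd)

exists = os.access(path, os.F_OK)              # True: the file exists
readable = os.access(path, os.R_OK)            # True: a fresh temp file is readable
missing = os.access(path + ".nope", os.F_OK)   # False: no such file

os.remove(path)                                # clean up
```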
File Operations: Python also supports file operations such as renaming and deleting files. And of course any shell command can be executed via os.system.
bool os.system(string command): attempts to execute the supplied shell command and returns true if the command executed.
bool chmod(string path, int mode): Changes the permissions of path to mode. Mode should be defined as an octal (i.e. 0644 or 0777).
list listdir(string path): Returns a list containing all the files in the directory path. The special entries "." and ".." are not included.
bool mkdir(string pathname [, int mode]): Makes a directory pathname with permissions mode (e.g. mkdir('new_dir', 0700);)
bool remove(string filename): Deletes filename
bool rename(string oldname, string newname): Renames a file
bool symlink(string target, string link): Creates a symbolic link to the existing target with name link.
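The mkdir/rename/listdir/remove operations above can be sketched in one self-contained round trip (a temporary scratch directory and illustrative file names are used):

```python
import os
import tempfile

base = tempfile.mkdtemp()                          # scratch directory
os.mkdir(os.path.join(base, "new_dir"), 0o700)     # make a directory with mode 0700
src = os.path.join(base, "old.txt")
open(src, "w").close()                             # create an empty file
os.rename(src, os.path.join(base, "renamed.txt"))  # rename it
entries = sorted(os.listdir(base))                 # note: no "." or ".." entries
os.remove(os.path.join(base, "renamed.txt"))       # delete the file
os.rmdir(os.path.join(base, "new_dir"))            # clean up
os.rmdir(base)
```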
Reading Files: Files can be read by several methods.
string read([int length]): Reads up to a specified number of bytes from the file into a string. It will read until it encounters EOF or the specified length is reached (default is all data).
string readline([int length]): Reads one entire line from a file, or up to length bytes, into a string. Reading stops when length bytes have been read or a newline or EOF is reached. A trailing newline character is kept in the string (but may be absent on the last line of the file).
list readlines([int sizehint]): Reads from a file using readline() until EOF and returns a list containing the lines read. If sizehint is present, whole lines totaling approximately sizehint bytes are read.
EOF: end-of-file is reached when read or readline returns an empty string.
while (s != ""):
    s = f.readline()
Writing to files: Files that have been opened for writing with open can be written to by two methods.
void write(string string): Writes the contents of string to the file. Does not append a newline character to the string. Only strings can be written so other datatypes must be converted to strings.
void writelines(list data): Writes a list or array of strings to the file. Newlines will not be added between the elements of the list/array.
Concurrency: File locking is available through the flock method in the fcntl module. Though be warned, flock does not work reliably on all operating systems. Therefore you may want to develop your own semaphores instead. The syntax is: flock(fileDescriptor fd, int operation), where the file descriptor can be obtained by calling the fileno() method of a file object and operation can be LOCK_SH to acquire a shared lock (reader), LOCK_EX to acquire an exclusive lock (writer), LOCK_UN to release a lock, or LOCK_NB if you don't want flock to block while locking.
Serializing Objects: An object can be serialized with methods in the pickle module. This will create a string representation of the object that can be stored in a file and later reconstructed into the object. In this way, ints, floats, or any object can be written to a file in addition to strings. If the object is an instance of a class, that class must be defined or imported in the python program that unserializes the object (i.e. if you have an object of type A in a.py, serialize it, write it to a file, and on b.py you read it back in from the file, then class A must be defined in b.py or included via import a to unserialize the object. An easy solution is to put the definition of class A in a file to be imported in both a.py and b.py). Arrays can be serialized as well. If you have an object x, you can serialize it and save it to a file:
f = open("file.dat","wb")
pickle.dump(x, f)
It can then be unserialized and restored by:
f = open("file.dat","rb")
y = pickle.load(f)
Sockets: To use sockets in Python, import socket . A server socket can then be opened with:
mySocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
mySocket.bind(('', 2727))
mySocket.listen(1)
The first line creates a socket object. The second line binds the socket to an address. In this case, '' is a symbolic name meaning localhost and we select port 2727. The address parameter should be in the form of a tuple as shown above. Finally, the third line listens for connections made to a socket. The argument is the maximum number of queued connections. Now that a server socket is open, we need to be able to accept data:
conn, addr = mySocket.accept()
print 'Connected with ', addr
while True:
    data = conn.recv(1024)
    if not data: break
    print data
    conn.send("Data received")
The accept() method accepts a connection and returns a pair (conn, address), where conn is a new socket object usable to send and receive data on the connection and address is the address bound to the socket on the other end (client side) of the connection. We then enter a loop and receive data from the client using the recv(bufsize) method. recv returns a string of the data received with a maximum amount of data specified by bufsize. If data is false, we break out of the loop. Otherwise we print the data and use send(string) to send a message back to the client.
Now our server is complete, but we need a client-side socket:
cSocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cSocket.connect(('', 2727))
The first line of course creates a socket object. The second line is similar to bind except that it connects to an existing server socket specified by the address. Note that ("localhost", 2727) would be another valid address. Now we need to send and receive data:
cSocket.send("Hello world!")
data = cSocket.recv(1024)
print data
cSocket.close()
send and recv work just the same as they do in the server socket. We send data to the server ("Hello world!"), receive the response ("Data received"), close the connection (which causes data to become false on the server program and terminate the loop), and print out the data.
file = open("data/teams.txt","rb")
team = "nonempty"
team = "nonempty"
while (team != ""):
    team = file.readline()
    if (team != ""): print team[:-1] #get rid of extra newline character

file = open("data/teams.txt","rb")
team = file.readlines()

list = ["Florida","Clemson","Duke"]
file = open("data/teams.txt","wb")
for j in list: file.write(j+"\n")

import pickle, fcntl
player = Player("J.J. Redick", "Duke", 4)
file = open("data/players.txt", "a")
fcntl.flock(file.fileno(), fcntl.LOCK_EX)
pickle.dump(player, file)
fcntl.flock(file.fileno(), fcntl.LOCK_UN)

Images in Python
FITS files: Python supports FITS files via the module pyfits. Once this module has been imported, you can read and write FITS files. FITS files are read and stored in an HDUList object, which has two components: header and data. The header is a list-like object and data is usually an array. To read in a FITS file, use
HDUList open(string): open a filename
info(): print a summary of the objects in the file.
Note that FITS files can have what are called multiple extensions-- multiple images and/or headers in a single file. info will list all objects in the file, their name, type, cards (number of entries in the header), dimensions, and format (i.e., Int16 or Float32).

Now that you have a FITS object, you can access its header and data. Since each object within a file can have its own header and data, you would access the primary header as x[0].header and the data as x[0].data.

Headers:You can print the entire header by calling the x[0].header.ascardlist() method. You can access individual elements in the header directly by keyword (x[0].header['NAXIS1']) or by index (x[0].header[3]). If you know that a keyword is already present in the header, you can update its value using the same notation:
x[0].header['NAXIS1'] = 265
But if the keyword might not be present and you want to add it if it isn't, use the update() method instead:
x[0].header.update('NAXIS1', 265)

Data: Since the data is an array, you can use any numarray methods on it. The data can thus be accessed using slice notation as well.
print shape(x[0].data)
print x[0].data[0:5,0:5]

Writing FITS Files: Once the data and header have been modified, you can write them back to a new FITS file using writeto(string). This writes to a new file and closes that file, but further operations can still be done on the data in memory. Note that if a file exists with the specified name, it will NOT be overwritten and an error will be raised. To close the input file, use x.close(). Examples:
x = pyfits.open("NGC3031.fits")
x.info()
-> Filename: NGC3031.fits
No. Name Type Cards Dimensions Format
0 PRIMARY PrimaryHDU 6 (530, 530) UInt8

print x[0].header.ascardlist()

NAXIS1 = 530
NAXIS2 = 530
HISTORY Written by XV 3.10a

print x[0].header['NAXIS1']
-> 530
print x[0].header[3]
-> 530

print x[0].data[3,0:5]
-> [11 11 11 9 9]

x[0].data[3,0:3] = array([0,0,0])
print x[0].data[3,0:5]
-> [0 0 0 9 9]

x[0].data += 5 #using numarray to operate on entire array
print x[0].data[3,0:5]
-> [ 5 5 5 14 14]


Guess My Number
Here is the code for Guess My Number in Python, a program that generates a random number between 1 and 100 and asks the user to guess it. It will tell the user if the number is higher or lower after each guess and keep track of the number of guesses.

import random, math
x = math.floor(random.random()*100)+1
z = 0
b = 0
while x != z:
    z = input("Guess My Number: ")
    b = b + 1
    if z < x: print("Higher!")
    elif z > x: print("Lower!")
print("Correct! " + str(b) + " tries.")

Python reference
Python.org: includes an introductory tutorial and a full manual.
Numarray homepage: includes a full manual about numarray features.

Python ASCII

print chr(ASCII)
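For example, chr maps an ASCII code to its character and ord is the inverse (illustrative values):

```python
letter = chr(65)   # 'A'
code = ord('A')    # 65
```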

Remove non-numbers characters from string

#! /usr/bin/python
import re
a = "10a"
b = "75q"
x = re.sub(r'\D',"", a)
y = re.sub(r'\D',"", b)
c = int(x)+int(y)
print c

Win32 Timer

SetTimer(hWnd, 1, 600, NULL); // Create the timer: (handle to window, ID#, time in ms, who to notify; NULL posts WM_TIMER messages to the window).
// Message handler:
case WM_TIMER:
k = 1;
return 0;
Tags: code,c++

Win32 Virtual Keyboard

switch (wParam)
{
case VK_HOME:
    // Insert code here to process the HOME key
    // ...
    break;

case VK_END:
    // Insert code here to process the END key
    // ...
    break;

case VK_INSERT:
    // Insert code here to process the INS key
    // ...
    break;

case VK_F2:
    // Insert code here to process the F2 key
    // ...
    break;

case VK_LEFT:
    // Insert code here to process the LEFT ARROW key
    // ...
    break;

case VK_RIGHT:
    // Insert code here to process the RIGHT ARROW key
    // ...
    break;

case VK_UP:
    // Insert code here to process the UP ARROW key
    // ...
    break;

case VK_DOWN:
    // Insert code here to process the DOWN ARROW key
    // ...
    break;

case VK_DELETE:
    // Insert code here to process the DELETE key
    // ...
    break;

default:
    // Insert code here to process other noncharacter keystrokes
    // ...
    break;
}

ncdu Linux disk usage tool

yum install ncdu
apt-get install ncdu
ncdu /etc
Tags: linux

Drupal Clean URL broke

/yoursite/?q=admin. ?q=admin is the basic page path

Windows Password

When this policy setting is enabled, users must create strong passwords to meet the following minimum requirements:

Passwords cannot contain the user's account name or parts of the user's full name that exceed two consecutive characters.
Passwords must be at least six characters in length.
Passwords must contain characters from three of the following four categories:

English uppercase characters (A through Z).
English lowercase characters (a through z).
Base 10 digits (0 through 9).
Non-alphabetic characters (for example, !, $, #, %).
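As a rough sketch of the three-of-four category rule above (a simplified check, not Microsoft's implementation — the full policy also matches name parts longer than two consecutive characters):

```python
import re

def meets_complexity(password, account_name):
    """Simplified check: length >= 6, no full account name embedded,
    and characters from at least 3 of the 4 categories."""
    if len(password) < 6:
        return False
    if len(account_name) >= 3 and account_name.lower() in password.lower():
        return False
    categories = [
        re.search(r'[A-Z]', password),         # English uppercase
        re.search(r'[a-z]', password),         # English lowercase
        re.search(r'[0-9]', password),         # base 10 digits
        re.search(r'[^A-Za-z0-9]', password),  # non-alphabetic characters
    ]
    return sum(1 for c in categories if c) >= 3

print(meets_complexity("Passw0rd", "alice"))   # True
print(meets_complexity("password", "alice"))   # False
```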

Mysql setup database and user

GRANT ALL PRIVILEGES ON {Database name}.*
TO '{Username}'@'{localhost}' IDENTIFIED BY '{password}';
SET PASSWORD FOR '{Username}'@'{localhost}' = PASSWORD('{password}');
DELETE FROM mysql.user WHERE User='{Username}';
DROP USER '{Username}'@'{localhost}';
'{Username}'=Database UserName
'{localhost}'=IP/Domain name
{Database name}=Database name
Tags: mysql

Ubuntu Firewall

apt-get update
apt-get install apache2 php5 libapache2-mod-php5 mysql-server libapache2-mod-auth-mysql php5-mysql phpmyadmin
service mysql start
mysql_secure_installation
service mysql restart
service apache2 restart
ufw allow 80/tcp

Linux Troubleshooting tools

MySQL >> ps auxww | grep mysql
HTTP >> ps auxww | grep httpd
Dumps >> tcpdump -i eth1 'udp port 53'
connections >> ss
Show mem >> cat /proc/meminfo
Show CPU >> cat /proc/cpuinfo
ps auxf

Show apache data
ps -aylC httpd | grep "httpd" | awk '{print $8}' | sort -n | tail -n 1


iptables -I RH-Firewall-1-INPUT -s {source IP} -p tcp -m tcp --dport 3306 -m comment --comment "MySQL Port" -j ACCEPT
iptables -I RH-Firewall-1-INPUT -p tcp --dport 22 -m comment --comment "SSH" -j ACCEPT
/sbin/service iptables save
Tags: linux

Passwords must meet complexity requirements

Updated: October 1, 2012

Applies To: Windows Server 2003, Windows Server 2003 R2, Windows Server 2003 with SP1, Windows Server 2003 with SP2, Windows Server 2008, Windows Server 2008 R2

This security setting determines whether passwords must meet complexity requirements. Complexity requirements are enforced when passwords are changed or created.

If this policy is enabled, passwords must meet the following minimum requirements when they are changed or created:

Passwords must not contain the user's entire samAccountName (Account Name) value or entire displayName (Full Name) value. Neither check is case sensitive:

The samAccountName is checked in its entirety only to determine whether it is part of the password. If the samAccountName is less than three characters long, this check is skipped.

The displayName is parsed for delimiters: commas, periods, dashes or hyphens, underscores, spaces, pound signs, and tabs. If any of these delimiters are found, the displayName is split and all parsed sections (tokens) are confirmed not to be included in the password. Tokens that are less than three characters in length are ignored, and substrings of the tokens are not checked. For example, the name "Erin M. Hagens" is split into three tokens: "Erin," "M," and "Hagens." Because the second token is only one character long, it is ignored. Therefore, this user could not have a password that included either "erin" or "hagens" as a substring anywhere in the password.

Passwords must contain characters from three of the following five categories:

Uppercase characters of European languages (A through Z, with diacritic marks, Greek and Cyrillic characters)

Lowercase characters of European languages (a through z, sharp-s, with diacritic marks, Greek and Cyrillic characters)

Base 10 digits (0 through 9)

Nonalphanumeric characters: ~!@#$%^&*_-+=`|\(){}[]:;"',.?/

Any Unicode character that is categorized as an alphabetic character but is not uppercase or lowercase. This includes Unicode characters from Asian languages.
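The displayName tokenizing rule described above can be sketched like this (a simplified illustration, not the actual Windows implementation):

```python
import re

def name_tokens_in_password(display_name, password):
    """Split displayName on the listed delimiters (commas, periods,
    hyphens, underscores, spaces, pound signs, tabs), ignore tokens
    shorter than 3 characters, and return any remaining token found
    in the password (case-insensitive)."""
    tokens = re.split(r'[,.\-_ #\t]+', display_name)
    return [t for t in tokens
            if len(t) >= 3 and t.lower() in password.lower()]

# "Erin M. Hagens" splits into "Erin", "M", "Hagens"; "M" is ignored
print(name_tokens_in_password("Erin M. Hagens", "xHagens42!"))  # ['Hagens']
print(name_tokens_in_password("Erin M. Hagens", "Secret99!"))   # []
```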
Tags: windows

Microsoft SQL Allow Remote Access

Configure a Server to Listen on a Specific TCP Port (SQL Server Configuration Manager)
SQL Server 2012

This topic describes how to configure an instance of the SQL Server Database Engine to listen on a specific fixed port by using the SQL Server Configuration Manager. If enabled, the default instance of the SQL Server Database Engine listens on TCP port 1433. Named instances of the Database Engine and SQL Server Compact are configured for dynamic ports. This means they select an available port when the SQL Server service is started. When you are connecting to a named instance through a firewall, configure the Database Engine to listen on a specific port, so that the appropriate port can be opened in the firewall.
For more information about the default Windows firewall settings, and a description of the TCP ports that affect the Database Engine, Analysis Services, Reporting Services, and Integration Services, see Configure the Windows Firewall to Allow SQL Server Access.
Tip

When selecting a port number, consult http://www.iana.org/assignments/port-numbers for a list of port numbers that are assigned to specific applications. Select an unassigned port number. For more information, see The default dynamic port range for TCP/IP has changed in Windows Vista and in Windows Server 2008.
In This Topic

To configure a server to listen on a specific TCP port, using:

SQL Server Configuration Manager
Using SQL Server Configuration Manager
To assign a TCP/IP port number to the SQL Server Database Engine

In SQL Server Configuration Manager, in the console pane, expand SQL Server Network Configuration, expand Protocols for {instance name}, and then double-click TCP/IP.
In the TCP/IP Properties dialog box, on the IP Addresses tab, several IP addresses appear in the format IP1, IP2, up to IPAll. One of these is for the IP address of the loopback adapter, 127.0.0.1. Additional IP addresses appear for each IP address on the computer. Right-click each address, and then click Properties to identify the IP address that you want to configure.
If the TCP Dynamic Ports dialog box contains 0, indicating the Database Engine is listening on dynamic ports, delete the 0.
In the IPn Properties area box, in the TCP Port box, type the port number you want this IP address to listen on, and then click OK.
In the console pane, click SQL Server Services.
In the details pane, right-click SQL Server ({instance name}) and then click Restart, to stop and restart SQL Server.

After you have configured SQL Server to listen on a specific port, there are three ways to connect to a specific port with a client application:

Run the SQL Server Browser service on the server to connect to the Database Engine instance by name.
Create an alias on the client, specifying the port number.
Program the client to connect using a custom connection string.
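For the third option, SQL Server clients specify a fixed port after the host name with a comma (not a colon). A sketch of building such a connection string in Python — the host name is hypothetical, and the driver name assumes an ODBC-based client:

```python
server = "myserver.example.com"   # hypothetical host
port = 1433                       # the fixed port configured above

# SQL Server convention: "host,port" with a comma, not "host:port"
conn_str = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=%s,%d;" % (server, port) +
    "DATABASE=master;Trusted_Connection=yes;"
)
print(conn_str)
```

A library such as pyodbc would accept this string via pyodbc.connect(conn_str), assuming the driver is installed.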