Fast Backup & Migration with rsync, scp & PostgreSQL
Backup & Migration Cookbook
This guide consolidates common Linux backup & migration commands into clean, copy‑pasteable snippets with all flags explained.
All sensitive details from the original notes have been replaced with placeholders that are safe to publish.
Disclaimer
- Always test on non‑production first.
- Replace placeholders with your own values.
- Read each command before running. Some operations (like `DROP DATABASE`) are destructive.
Redaction Key {#redaction-key}
| Placeholder | Meaning |
|---|---|
| `<SRC_DEV>` | Source block device (e.g., `/dev/md2`) |
| `<MNT>` | Mount point (e.g., `/mnt`) |
| `<HOST_A>` / `<HOST_B>` / `<HOST_DB>` | Remote host/IP |
| `<SSH_PORT>` | SSH port (e.g., 23) |
| `<USER_X>` | Remote username (e.g., backup provider login) |
| `<PASSWORD_X>` | Password/token used in examples |
| `<FTP_HOST>` | FTP/backup hostname |
| `<LOCAL_DIR>` | Local directory path |
| `<REMOTE_DIR>` | Remote directory path |
| `<DBNAME>` | PostgreSQL database name |
| `<DBUSER>` | PostgreSQL database user |
| `<DBHOST>` | PostgreSQL host |
| `<DBPORT>` | PostgreSQL port (e.g., 5432) |
| `<FILE>` | File name (e.g., backup.tar.gz) |
1) Mount & Chroot (rescue workflows)
```bash
# Mount target system
mount <SRC_DEV> <MNT>
mount --bind /dev <MNT>/dev
mount --bind /proc <MNT>/proc
mount --bind /sys <MNT>/sys

# Enter chroot
chroot <MNT>
```
Flags explained
- `--bind` — Bind-mounts a directory into the chroot, so `/dev`, `/proc`, and `/sys` are available inside.
- `chroot <dir>` — Switches the process root to `<dir>`, simulating a booted system.
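When you are done inside the chroot, the cleanup is the mounts in reverse (a minimal sketch):

```bash
# Leave the chroot shell, then unmount the bind mounts and the target filesystem
exit
umount <MNT>/sys <MNT>/proc <MNT>/dev
umount <MNT>
```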
2) Mount a remote FTP space with curlftpfs
```bash
# Mount FTP (obfuscated credentials)
curlftpfs ftp://<USER_X>:<PASSWORD_X>@<FTP_HOST>/ /media/ftp

# Optional: another mount (external space)
curlftpfs ftp://<USER_Y>:<PASSWORD_Y>@<FTP_HOST>/ /media/ftp_ext
```
Flags explained
- `curlftpfs` uses libcurl to mount FTP/FTPS as a filesystem.

Tip: consider `-o allow_other,uid=$(id -u),gid=$(id -g)` if multiple users need access (see the sketch below).
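A minimal sketch of that tip, using the same placeholders; note that `allow_other` typically requires `user_allow_other` to be enabled in `/etc/fuse.conf` when mounting as a non-root user:

```bash
# Mount with shared access: allow_other lets other local users see the mount,
# uid/gid map the remote files' ownership to the invoking user
curlftpfs -o allow_other,uid=$(id -u),gid=$(id -g) \
  ftp://<USER_X>:<PASSWORD_X>@<FTP_HOST>/ /media/ftp
```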
3) Copying data with rsync (preferred) & scp
rsync (SSH transport, verbose, resumable)
```bash
# Push to remote over a non-default SSH port with exclusions
rsync -arvP -e "ssh -p <SSH_PORT>" --exclude='lost+found' <LOCAL_DIR>/ <USER_X>@<HOST_A>:<REMOTE_DIR>/

# Push to a different host
rsync -arvP <LOCAL_DIR>/ root@<HOST_B>:/home/backup_project/
```
scp (simple copy over SSH)
```bash
# Recursive copy preserving times/permissions over a custom port
scp -P <SSH_PORT> -r -p <LOCAL_DIR> <USER_X>@<HOST_A>:<REMOTE_DIR>/
```
Flags explained
rsync

- `-a` — Archive mode (recursive, preserves perms, times, symlinks, devices).
- `-r` — Recursive (redundant with `-a`, kept for readability).
- `-v` — Verbose.
- `-P` — Combines `--partial` (keep partial files) and `--progress` (show progress).
- `-e "ssh -p <SSH_PORT>"` — Use SSH with a custom port.
- `--exclude='lost+found'` — Skip matching paths.
scp

- `-r` — Recursive copy.
- `-P <SSH_PORT>` — Use a custom SSH port (uppercase; lowercase `-p` does not take a port).
- `-p` — Preserve modification times and modes of copied files.
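For reference, the rsync flags above also work in the pull direction; a sketch fetching the remote tree back into a local directory (trailing slashes copy directory contents rather than the directory itself):

```bash
# Pull from the remote host over the custom SSH port
rsync -arvP -e "ssh -p <SSH_PORT>" --exclude='lost+found' \
  <USER_X>@<HOST_A>:<REMOTE_DIR>/ <LOCAL_DIR>/
```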
4) Tar & Zip basics
```bash
# Create a gzipped tarball
tar -czvf <FILE>.tar.gz <LOCAL_DIR>

# Extract a tarball
tar -xf <FILE>.tar.gz
```
Flags explained
- `-c` — Create archive.
- `-z` — Filter through `gzip`.
- `-v` — Verbose.
- `-f` — Archive file name follows.
- `-x` — Extract archive (GNU tar auto-detects gzip on extraction, so `-z` is optional there).
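A small optional extension of the extract step (the `restore/` subdirectory name is just an example): `-C` switches into a target directory before unpacking, the same flag used in the parallel-restore recipe further down.

```bash
# Unpack into a dedicated directory instead of the current one
mkdir -p <LOCAL_DIR>/restore
tar -xzvf <FILE>.tar.gz -C <LOCAL_DIR>/restore
```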
5) PostgreSQL: classic dump/restore (pg_dump + gzip)
```bash
# Create a compressed dump
sudo -u postgres pg_dump -v <DBNAME> | gzip > <DBNAME>.sql.gz

# Restore from gzip dump (psql will run SQL)
gunzip -c <DBNAME>.sql.gz | sudo -u postgres psql -d <DBNAME>
```
Flags explained
pg_dump

- `-v` — Verbose progress.
- `<DBNAME>` — Source database.

gzip/gunzip

- `gzip > file.gz` — Compress stream to file.
- `gunzip -c` — Decompress to stdout (keeps the original file).
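The plain-SQL restore above assumes the target database already exists; if it does not, a minimal sketch is to create it first (owner and names are placeholders, matching section 7):

```bash
# Create an empty database owned by the application user, then replay the dump
sudo -u postgres createdb -O <DBUSER> <DBNAME>
gunzip -c <DBNAME>.sql.gz | sudo -u postgres psql -d <DBNAME>
```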
6) PostgreSQL: fast, parallel dump/restore (directory format + pigz)
For very large DBs (especially with pre-compressed payloads like `bytea` images), avoid `pg_dump` compression and compress afterwards with `pigz` (parallel gzip).
```bash
# Parallel directory-format dump (no compression)
sudo -u postgres pg_dump -Z0 -j 10 -Fd <DBNAME> -f <DBNAME>

# Tar and compress with pigz (multi-core)
tar -cf - <DBNAME> | pigz > <DBNAME>.tar.gz
rm -rf <DBNAME>

# Transfer the archive
scp <DBNAME>.tar.gz root@<HOST_DB>:/home/<USER_X>/

# On the target: unpack and parallel restore
mkdir -p <DBNAME>   # target directory must exist before tar -C
pigz -dc <DBNAME>.tar.gz | tar -C <DBNAME> --strip-components 1 -xf -
sudo -u postgres pg_restore -j 10 -Fd -d <DBNAME> <DBNAME>
```
Flags explained
pg_dump

- `-Z0` — Disable built-in compression (faster when data is already compressed).
- `-j 10` — Use 10 parallel jobs (tune to your CPU/IO).
- `-Fd` — Directory format (enables parallelism).
- `-f <dir>` — Output directory.

tar/pigz

- `tar -c` — Create archive, written to stdout via `-f -`.
- `pigz` — Parallel gzip; `-d` to decompress, `-c` to write to stdout.

pg_restore

- `-j 10` — Parallel restore workers.
- `-Fd` — Input is a directory-format dump.
- `-d <DBNAME>` — Target database (must exist unless `--create` is used; see the sketch below).
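If the target database does not exist yet, one option (a sketch) is to let `pg_restore` create it from the dump; with `--create`, `-d` only names the database to connect to first:

```bash
# Connect to the maintenance DB "postgres", create <DBNAME> from the dump, then restore into it
sudo -u postgres pg_restore -j 10 -Fd --create -d postgres <DBNAME>
```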
7) PostgreSQL: create/drop DB & roles
```bash
# Create a role and database
sudo -u postgres createuser -D -P <DBUSER>
sudo -u postgres createdb -O <DBUSER> <DBNAME>

# Drop a database (destructive)
sudo -u postgres psql -c "DROP DATABASE <DBNAME>"

# Restore with pg_restore (custom/tar/dir formats)
sudo -u postgres pg_restore --create --exit-on-error --verbose --dbname=<DBNAME> <INPUT_ARCHIVE_OR_DIR>
```
Flags explained
createuser

- `-D` — Do not allow the new role to create databases (`--no-createdb`); it is not a superuser unless `-s` is given.
- `-P` — Prompt for the new role's password.

createdb

- `-O <owner>` — Set the database owner.

pg_restore

- `--create` — Create the database before restoring into it.
- `--exit-on-error` — Stop on the first error.
- `--verbose` — Show detailed progress.
- `--dbname=<DBNAME>` — Database to connect to for running the pre-restore commands.
Debian/Ubuntu note: You may see `--cluster 13/main` in examples. That is a pg_wrapper extension on Debian-based systems, not a native `psql`/`pg_dump` flag. It selects the running PostgreSQL instance by version/cluster.
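A short sketch of how that looks on a Debian/Ubuntu host (the cluster name `13/main` is an example reported by `pg_lsclusters`):

```bash
# List installed clusters: version, name, port, status, data directory
pg_lsclusters

# Point the wrapped client at a specific cluster
sudo -u postgres pg_dump --cluster 13/main -Z0 -j 10 -Fd <DBNAME> -f <DBNAME>
```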
8) PostgreSQL: psql & maintenance
```bash
# Run a SQL file against a DB
sudo -u postgres psql -e -U <DBUSER> -W -h <DBHOST> -p <DBPORT> -d <DBNAME> -f <FILE>.sql

# Analyze & vacuum all DBs (optimize planner stats)
vacuumdb -a -z
```
Flags explained
psql

- `-e` — Echo commands sent to the server.
- `-U <user>` — DB user.
- `-W` — Force password prompt.
- `-h <host>` — Server host.
- `-p <port>` — Server port.
- `-d <dbname>` — Database name.
- `-f <file>` — Execute commands from file.

vacuumdb

- `-a` — All databases.
- `-z` — Analyze (update statistics).
9) Using scp & rsync with archives and config
```bash
# Copy multiple archives
scp -P <SSH_PORT> <FILE1>.gz <FILE2>.tar.gz <USER_X>@<HOST_A>:/home/<USER_X>/

# Sync entire trees
rsync -arvP <LOCAL_DIR>/ root@<HOST_A>:/home/sites/
```
Tip: You can pin a per-repo SSH key for Git:

```bash
git config --add --local core.sshCommand 'ssh -i /path/to/id_rsa'
```
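Alternatively (a sketch; the `backup-a` alias is hypothetical), an entry in `~/.ssh/config` keeps the port, user, and key out of every rsync/scp/git command:

```
Host backup-a
    HostName <HOST_A>
    Port <SSH_PORT>
    User <USER_X>
    IdentityFile ~/.ssh/id_rsa
```

With that in place, `rsync -arvP <LOCAL_DIR>/ backup-a:<REMOTE_DIR>/` picks up those settings automatically.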
10) Quick psql connections
```bash
# Connect to a remote DB
psql -p <DBPORT> -U <DBUSER> --password -h <DBHOST>

# Connect to a legacy host/DB
psql -p <DBPORT> -U <DBUSER> --password -h <LEGACY_HOST> <DBNAME>
```
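To avoid the interactive prompts, drop `--password` and let libpq read credentials from `~/.pgpass` (a sketch; the values are placeholders and the file must not be group/world readable):

```bash
# Format: hostname:port:database:username:password (one line per connection)
echo '<DBHOST>:<DBPORT>:<DBNAME>:<DBUSER>:<PASSWORD_X>' >> ~/.pgpass
chmod 600 ~/.pgpass
```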
11) Handy one‑liners
```bash
# Show DB sizes (human readable)
sudo -u postgres psql -c "SELECT datname, pg_size_pretty(pg_database_size(datname)) FROM pg_database;"

# Restore one table from a dump (if available as separate object)
sudo -u postgres pg_restore -t <SCHEMA>.<TABLE> -d <DBNAME> <DUMPFILE_OR_DIR>
```
Flags explained
pg_restore

- `-t <name>` — Restore only a specific table (can be schema-qualified); see the listing sketch below.
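To see which tables a dump actually contains before picking one, a quick sketch using the archive's table of contents (no database connection needed):

```bash
# List dump contents and filter for table data entries
pg_restore -l <DUMPFILE_OR_DIR> | grep 'TABLE DATA'
```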
Appendix: Flag Reference Tables
rsync
| Flag | Meaning |
|---|---|
| `-a` | Archive (recursive + preserve attributes) |
| `-r` | Recursive |
| `-v` | Verbose |
| `-P` | `--partial` + `--progress` |
| `-e "ssh -p N"` | Use SSH transport with port N |
| `--exclude=PATTERN` | Skip matching files/dirs |
scp
| Flag | Meaning |
|---|---|
| `-r` | Recursive copy |
| `-P N` | Use SSH port N |
| `-p` | Preserve modification times and modes |
tar
| Flag | Meaning |
|---|---|
| `-c` | Create archive |
| `-x` | Extract archive |
| `-z` | gzip filter |
| `-v` | Verbose |
| `-f FILE` | Archive file name |
pg_dump
| Flag | Meaning |
|---|---|
| `-v` | Verbose |
| `-Z0` | No compression |
| `-j N` | N parallel jobs (with `-Fd`) |
| `-Fd` | Directory-format dump |
| `-f DIR` | Output directory |
pg_restore
| Flag | Meaning |
|---|---|
| `-j N` | N parallel jobs |
| `--create` | Create DB before restore |
| `--exit-on-error` | Stop on first error |
| `--verbose` | Verbose output |
| `-Fd` | Input is directory-format dump |
| `-d DB` | Target DB name |
| `-t NAME` | Restore selected table only |
psql
| Flag | Meaning |
|---|---|
| `-e` | Echo input lines |
| `-U USER` | DB user |
| `-W` | Prompt for password |
| `-h HOST` | DB host |
| `-p PORT` | DB port |
| `-d DB` | Database |
| `-f FILE` | Execute file |
vacuumdb
| Flag | Meaning |
|---|---|
| `-a` | All databases |
| `-z` | Analyze statistics |
createuser / createdb
| Command | Flag | Meaning |
|---|---|---|
| createuser | `-D` | New role may not create databases |
| createuser | `-P` | Prompt for password |
| createdb | `-O USER` | Set DB owner |
Final checks before publishing
- Replace every placeholder.
- If you use Debian/Ubuntu with multiple clusters, prefer `pg_lsclusters`/`pg_ctlcluster` and the `--cluster` selector via pg_wrapper.
- On large DBs, benchmark both the classic `pg_dump | gzip` and the parallel directory-format + pigz approach; choose the fastest for your hardware (see the timing sketch below).
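A rough way to run that benchmark (a sketch; job count and output file names are illustrative only):

```bash
# Classic single-process dump piped through gzip
time sh -c 'sudo -u postgres pg_dump <DBNAME> | gzip > classic.sql.gz'

# Parallel directory-format dump compressed with pigz
time sh -c 'sudo -u postgres pg_dump -Z0 -j 10 -Fd <DBNAME> -f dump_dir && tar -cf - dump_dir | pigz > parallel.tar.gz'
```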
Happy migrating! 🚀