Automating Repetitive Dev Tasks with Scripts in 2026
April 13, 2026 · Developer Productivity, Automation, Scripting
Repetitive dev tasks steal focus. Whether it’s formatting JSON, generating UUIDs, or batch-encoding URLs, these chores add up to hours per week. The fix is automation: small scripts that run locally, in CI, or on git hooks. This guide shows practical, low-friction scripts you can copy and adapt in 2026. The goal isn’t a sprawling build system. It’s a handful of reliable scripts you’ll actually use.
Why scripts beat manual workflows
Automation removes decision fatigue and reduces mistakes. A script can run the same steps every time, output a consistent format, and chain multiple tools in a single command. It also serves as documentation: six months from now, your script explains exactly how you validated, transformed, or generated assets.
- Consistency: every run produces the same output
- Speed: one command replaces a multi-step checklist
- Reliability: fewer copy/paste errors
- Portability: teammates can run the same workflow
What to automate first (high ROI tasks)
Start with small, frequent tasks. These are the best candidates:
- Formatting and validating JSON
- Generating UUIDs for IDs and test data
- Base64 encoding/decoding for tokens, fixtures, or assets
- URL encoding/decoding for API testing and logs
- Regex testing for log parsing or input validation
- Batch renaming files and creating boilerplate
Script style guide (practical defaults)
Keep scripts portable and easy to run:
- Use bash for simple orchestration
- Use Node.js or Python 3.12 for data transformations
- Write scripts to accept stdin and emit stdout
- Include usage and a --help flag
- Exit with non-zero status on failure
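The defaults above can be sketched as a minimal skeleton. This is an illustrative template, not a prescribed layout: the script name and the uppercase transform are placeholders — swap in your real logic.

```shell
#!/usr/bin/env bash
set -euo pipefail

usage() {
  echo "Usage: my-task.sh [--help] < input.txt > output.txt" >&2
}

transform() {
  # Placeholder transform: uppercase stdin to stdout so the script
  # composes in pipelines; replace with your real logic.
  tr '[:lower:]' '[:upper:]'
}

if [[ "${1:-}" == "--help" ]]; then
  usage
  exit 0
fi

transform
```

Because it reads stdin and writes stdout, it chains naturally: cat input.txt | ./my-task.sh | sort.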
Example 1: One-command JSON cleanup + validation
This script formats JSON files and validates them. It’s ideal for API fixtures and config files.
#!/usr/bin/env bash
set -euo pipefail

if [[ $# -lt 1 ]]; then
  echo "Usage: ./json-clean.sh <file1.json> [file2.json ...]"
  exit 1
fi

for file in "$@"; do
  node -e "
    const fs = require('fs');
    const path = process.argv[1];
    const raw = fs.readFileSync(path, 'utf8');
    const parsed = JSON.parse(raw);
    const pretty = JSON.stringify(parsed, null, 2) + '\n';
    fs.writeFileSync(path, pretty, 'utf8');
  " "$file"
  echo "✔ formatted: $file"
done
Quick check or manual fallback: use the JSON Formatter to validate and format ad-hoc payloads.
Example 2: Generate UUIDs for test fixtures
When you need unique IDs for test data, a one-liner beats hand-typing.
#!/usr/bin/env node
const { randomUUID } = require('node:crypto');
const count = Number(process.argv[2] || 5);
for (let i = 0; i < count; i++) {
  console.log(randomUUID());
}
Save it as uuid.js, then run node uuid.js 10 to print 10 UUIDs. For quick copy/paste, the UUID Generator is handy.
Example 3: Batch URL encode/decode
APIs often require encoded query strings, and log lines frequently need decoding. This Python script handles both.
#!/usr/bin/env python3
import sys
import urllib.parse

if len(sys.argv) < 3:
    print("Usage: url_codec.py encode|decode 'string'")
    sys.exit(1)

mode = sys.argv[1]
value = sys.argv[2]

if mode == "encode":
    print(urllib.parse.quote(value, safe=""))
elif mode == "decode":
    print(urllib.parse.unquote(value))
else:
    print("Mode must be encode or decode")
    sys.exit(1)
When you need a visual check or single value, use the URL Encoder/Decoder.
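A batch variant is a small extension (an assumption, not part of the script above): since url_codec.py takes one value per call, loop over input lines. The inline python below mirrors the script's quote() call so the sketch is self-contained; in a repo you would call python3 scripts/url_codec.py encode "$line" instead.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Encode every line of stdin, one encoded value per output line.
encode_lines() {
  while IFS= read -r line; do
    python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.argv[1], safe=""))' "$line"
  done
}

printf 'a b\nuser@example.com\n' | encode_lines
```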
Example 4: Base64 fixtures for tests
Base64 is everywhere: auth headers, binary fixtures, signed payloads. This script encodes or decodes.
#!/usr/bin/env node
const [mode, input] = process.argv.slice(2);
if (!mode || !input) {
  console.error('Usage: base64.js encode|decode "string"');
  process.exit(1);
}
if (mode === 'encode') {
  console.log(Buffer.from(input, 'utf8').toString('base64'));
} else if (mode === 'decode') {
  console.log(Buffer.from(input, 'base64').toString('utf8'));
} else {
  console.error('Mode must be encode or decode');
  process.exit(1);
}
For quick spot checks, use the Base64 Encoder/Decoder.
Example 5: Regex-driven log parsing
Need to extract data from logs? A small script can do it.
#!/usr/bin/env python3
import re
import sys

pattern = re.compile(r"status=(\d{3}).*?path=([^\s]+)")
for line in sys.stdin:
    m = pattern.search(line)
    if m:
        status, path = m.group(1), m.group(2)
        print(f"{status}\t{path}")
Test and refine the regex using the Regex Tester before you bake it into the script.
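A quick smoke check helps before wiring the parser into a pipeline. The snippet below applies the same pattern to a sample log line (inline python so it's self-contained; in a repo you would pipe into scripts/regex-parse.py instead, and the sample log line is made up for illustration).

```shell
#!/usr/bin/env bash
set -euo pipefail

# Same pattern as the script above, wrapped in a function for reuse.
parse_log() {
  python3 -c '
import re, sys
pattern = re.compile(r"status=(\d{3}).*?path=([^\s]+)")
for line in sys.stdin:
    m = pattern.search(line)
    if m:
        print(f"{m.group(1)}\t{m.group(2)}")
'
}

echo 'level=info status=404 path=/api/users method=GET' | parse_log
```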
Leveling up: Task runners and hooks
Once your scripts exist, use a task runner or git hooks to make them routine.
NPM scripts for consistency
{
  "scripts": {
    "format:json": "bash scripts/json-clean.sh data/*.json",
    "uuid:gen": "node scripts/uuid.js 10",
    "url:encode": "python3 scripts/url_codec.py encode",
    "base64:decode": "node scripts/base64.js decode"
  }
}
Pre-commit hooks to prevent mistakes
Use a simple hook to block malformed JSON from reaching your repo.
#!/usr/bin/env bash
set -euo pipefail

CHANGED=$(git diff --cached --name-only --diff-filter=ACM | grep -E '\.json$' || true)
if [[ -z "$CHANGED" ]]; then
  exit 0
fi

while IFS= read -r file; do
  # Pass the path as an argument so names with spaces or quotes stay safe.
  node -e "JSON.parse(require('fs').readFileSync(process.argv[1], 'utf8'))" "$file"
  echo "✔ valid JSON: $file"
done <<< "$CHANGED"
This adds a few seconds to commit time but saves hours of debugging.
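To install the hook, either save it as .git/hooks/pre-commit and mark it executable, or (better for teams) version it in the repo and point git at that directory with core.hooksPath. A self-contained sketch in a throwaway repo — the paths are illustrative, and a stand-in hook body replaces the JSON check above:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Demo in a throwaway repo so the commands are safe to run anywhere.
repo=$(mktemp -d)
cd "$repo"
git init -q

# Version the hook with the project instead of copying it into .git/hooks.
mkdir -p scripts/hooks
printf '#!/usr/bin/env bash\nexit 0\n' > scripts/hooks/pre-commit
chmod +x scripts/hooks/pre-commit

# Tell git to run hooks from the versioned directory.
git config core.hooksPath scripts/hooks
```

With core.hooksPath set, every clone that runs git config the same way picks up the hook automatically — no manual copying.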
Putting it together: A daily automation kit
Here’s a minimal toolkit you can keep in a scripts/ folder:
- json-clean.sh — format and validate JSON fixtures
- uuid.js — generate unique IDs for tests
- url_codec.py — encode/decode URLs for API tests
- base64.js — encode/decode payloads and secrets
- regex-parse.py — extract data from logs
Tips to keep scripts maintainable
- Document inputs/outputs: use a short usage string
- Keep scope small: one script per task
- Prefer stdin/stdout: makes scripts composable
- Version scripts with your repo: scripts evolve with your project
- Add tests for scripts: even one test case catches regressions
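A one-assertion smoke test goes a long way. The sketch below round-trips a string through Base64 and fails loudly on mismatch (python3's base64 module stands in for base64.js so the check is self-contained; in a repo, call node scripts/base64.js instead).

```shell
#!/usr/bin/env bash
set -euo pipefail

# Round-trip check: encode, decode, and compare with the original input.
roundtrip() {
  local input="$1"
  local encoded decoded
  encoded=$(python3 -c 'import base64, sys; print(base64.b64encode(sys.argv[1].encode()).decode())' "$input")
  decoded=$(python3 -c 'import base64, sys; print(base64.b64decode(sys.argv[1]).decode())' "$encoded")
  [[ "$decoded" == "$input" ]]
}

roundtrip "hello world" && echo "✔ base64 round-trip ok"
```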
Practical example: Automating API test data creation
Imagine you need fresh test data daily. You can combine scripts into a single workflow:
#!/usr/bin/env bash
set -euo pipefail

ID=$(node scripts/uuid.js 1)
NOW=$(date -u +%Y-%m-%dT%H:%M:%SZ)

mkdir -p data
cat <<EOF > data/request.json
{
  "id": "$ID",
  "createdAt": "$NOW",
  "name": "Sample User",
  "email": "sample@example.com"
}
EOF

bash scripts/json-clean.sh data/request.json
Pair this with the JSON Formatter for quick inspections and you’ve got a dependable pipeline in minutes.
Common pitfalls to avoid
- Over-automation: if a task happens once a month, keep it manual
- Hidden dependencies: document required versions (Node 20+, Python 3.12)
- Silent failures: always log what the script did
- Hard-coded paths: make scripts path-agnostic or accept flags
Automation checklist
- Can I run it with one command?
- Does it fail fast with clear errors?
- Is output deterministic?
- Can a teammate understand it in 60 seconds?