Git is not about collaboration only
Well, as far as I'm concerned, the times when it was okay to jump onto a production server and hack-hack-hack code right there are mostly gone, at least when we talk about revenue-generating projects.
Just imagine: somebody needs to help out or replace the current coder on this project, say he's busy with another project now. With a revision control system, especially one properly used (I mean commits that get real messages describing what's been done, not giant pile commits like "yesterday I changed it all, or at least it looks like all lines were changed") and integrated with issue tracking, so that it's possible to open a ticket and find all the related code changes, what impact do you imagine this can have?
To me, it's self-evident.
I forgot to say, and it's exactly about forgetting: people tend to forget. Even the author can forget what he did, why he changed those lines and so on. Revision control is of great help here. Three months later, without a quick walk through the history of commits, you have much better chances to lose even more time and make mistakes.
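Even a plain search through the history pays off here. A quick sketch, assuming the commit messages mention the ticket (the ticket ID is made up):

    # find every commit that mentions the (hypothetical) ticket PROJ-123
    git log --all --oneline --grep='PROJ-123'
    # then inspect the actual changes of any commit it turned up
    git show <commit-hash>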
Git isn’t about collaboration only. It’s about development in general.
4.0.5 BFS-UltraKSM-BFQ packed in .deb
download and dpkg -i *.deb
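In case it helps, a minimal sketch (the download directory is made up, adjust to wherever you saved the packages):

    # hypothetical download location
    cd ~/Downloads/kernel-4.0.5-bfs-uksm-bfq
    # install the image and headers packages in one go (run as root)
    dpkg -i *.deb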
Do you comment your interactive shell?
Another shell trick I'd like to share is commenting. Why would one need this in interactive mode? Well, the real world isn't perfect, and sometimes you might need to log into some remote system using its new IP address, say, during a service migration from the old box to the new one. DNS hasn't been updated yet, it's too early. So, what are you up to, remembering all those digits? Forget it! ;) (You will anyway.) :)
Long story short: with bash it's really easy, just put # after the command in question and that's it. But zsh won't let you go this way. So it's time to recall that we have something more universal, the colon (:). In fact it stands for "no operation", but it's useful anyway.
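Something along these lines (the IP address is made up, of course):

    # bash: a trailing comment just works in an interactive shell
    ssh 203.0.113.25  # Host we migrate to
    # zsh: the same effect via the no-op colon builtin
    ssh 203.0.113.25 ; : Host we migrate to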
Should you need to repeat the command, Ctrl-R and an interactive search for 'Host we migrate to' would retrieve it for you, both in zsh and bash.
Linux kernel 4.0.5 is out, bringing fixes for severe errors
The recently baked 4.0.5 has fixes for some really nasty bugs. If you have ever wondered why you should update the kernel with minor releases, an example awaits:
- libata: Blacklist queued TRIM on all Samsung 800-series
- FSes and co.:
  - fs/dcache.c: d_walk() might skip too much
  - vfs: read file_handle only once in handle_to_path
  - Btrfs: fix racy system chunk allocation when setting block group ro
  - ext4: fix NULL pointer dereference when journal restart fails; fix lazytime optimization (timestamp would get written to the wrong inode)
  - xfs: xfs_attr_inactive leaves inconsistent attr fork state behind
- md, i.e. Linux Software RAID (widely known after its userspace control utility as mdadm): fix race when unfreezing sync_action
- dm (device mapper subsystem)
- task scheduler
- pty: Fix input race when closing
- lib: Fix strnlen_user() to not touch memory after the specified maximum
So I'm certainly rebasing my own builds ASAP.
4.0.4 brain-fuck-scheduled (BFS) kernel debs are here too
Well, since I had some difficulties earlier with the patches by Alfred Chen (they're incorporated into pf-kernel by default now, BTW), the BFS rebase to 4.0.4 was slightly delayed, and not least by the 4.0-realtime release ;-) But I finally became curious enough to find the root of the boot freeze and gave it a compile session too. So, yeah, without those patches all is cool and shiny as well. Grab'n'deploy! ;-) In case you're wondering why you'd need it: BFS is known to behave better than the standard CFS for typical desktop systems and workloads.
.deb of 4.0.4-rt Real-time+UltraKSM+BFQ kernel available
To my surprise there was a 4-RT release made, so after rebasing to 4.0.4 and adding UltraKSM and BFQ I compiled it and am enjoying the result. In case you'd like to give it a try, it's quite easy on .deb-based distros. Grab the packages, then just do dpkg -i *.deb. No warranty, use at your own risk (as usual). Let me know if you enjoyed it too.
Is there synproxy for CentOS 6 kernel?
I'm not sure, but I think I originally heard about the concept of a "SYN proxy" from OpenBSD's pf. As you can notice, there's a recommendation to use it carefully, and its main purpose is SYN-flood DDoS mitigation. Well, when it comes to Linux, it looks like only CentOS 7 with its 3.10-based kernel has the SYNPROXY target built in, while CentOS 6, based on kernel 2.6.32, lacks it. OTOH, from the sources it doesn't look like a backport would be too complex to accomplish. There's also an LWN article on the SYNPROXY target.
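For reference, a rough sketch of how the target is typically wired up on a new enough kernel (port 80 and the option values here are just an example, tune them to your setup):

    # don't create conntrack entries for incoming SYNs on port 80
    iptables -t raw -A PREROUTING -p tcp --dport 80 --syn -j CT --notrack
    # let SYNPROXY complete the handshake on behalf of the real server
    iptables -A INPUT -p tcp --dport 80 -m conntrack --ctstate INVALID,UNTRACKED \
             -j SYNPROXY --sack-perm --wscale 7 --mss 1460
    # whatever is still INVALID after that gets dropped
    iptables -A INPUT -p tcp --dport 80 -m conntrack --ctstate INVALID -j DROP
    # SYNPROXY relies on strict TCP conntrack behaviour
    sysctl -w net.netfilter.nf_conntrack_tcp_loose=0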
Linux netfilter aka iptables sucks at logging
Actually I really like Netfilter. But when it comes to logging, I have to admit it sucks rather boundlessly. Why is that? Let me explain. The only way to get logging out of Netfilter is to use a logging "target", and it has several targets suitable for that. For simplicity we can stick to the LOG target; the other logging targets aren't any different with regard to the issue we're talking about. So, the issue comes from the fact that by design you can have only one target per Netfilter rule. The basic targets are ACCEPT and DROP. So, are you starting to get it? Right: if you want to log some packet you're discarding, you have to postpone the discard, LOG it first, and only then decide the fate of the packet. Thus if you want to LOG and DROP, you need to repeat the same rule with different targets.
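Say, something along these lines (the INPUT chain is used here just for illustration):

    # log the packet first...
    iptables -A INPUT -s 1.2.3.4 -j LOG
    # ...then discard it with a second, almost identical rule
    iptables -A INPUT -s 1.2.3.4 -j DROP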
This effectively doubles the number of rules for one task and thus clutters the firewall. And what if you want to change the criteria? Your abuser's host IP changed from 1.2.3.4 to 1.2.3.5; right, now you need to change the source IP twice, on both lines. This is, BTW, related to so-called "database normalization"; you'd better take a look at what it is before it's too late and your DB sucks even more than Netfilter's logging.
Netfilter's FAQ rather naively considers this not a big deal. It suggests using combined, user-created targets. Yeah, BTW, that's a thing I like in Netfilter too. So how can we (naively) try to overcome this shortcoming? We create our own target, say, logdrop.
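A sketch of how it could be defined:

    # a user-defined chain that both logs and drops
    iptables -N logdrop
    iptables -A logdrop -j LOG
    iptables -A logdrop -j DROP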
Now we can log the abuser's attempt and drop his malicious intents with one line.
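For instance (again, the INPUT chain is just for illustration):

    iptables -A INPUT -s 1.2.3.4 -j logdrop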
Cool? Well, not really. The thing is, you often want to have different parameters for logging. Say, in some cases you want to use limit, thus limiting logging to 3 times per second (but dropping every packet, of course). Sometimes you really need a different log prefix so that later you can distinguish between different kinds of incidents when you're analyzing the log files. So, if we follow the Netfilter guys' advice, it means we have to create lots of different logdrop targets: say, Log3persecDrop, LogPrefixALERT-portscanDrop, LogReject (yeah, you can not only silently DROP packets, but REJECT them too).
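Just to illustrate one of those variants (the chain name and parameters here are made up):

    # a dedicated variant: rate-limited logging with a portscan-specific prefix
    iptables -N logdrop-scan
    iptables -A logdrop-scan -m limit --limit 3/second -j LOG --log-prefix "ALERT-portscan: "
    iptables -A logdrop-scan -j DROP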
Cluttering and mess again. How do other firewalls cope with this? For example, FreeBSD's ipfw allows every rule to have a log option: it's an option, "to log", not a replacement for allowing the packet to pass or denying it.
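For comparison, a sketch in ipfw terms (the rule number and address are made up):

    # one rule both logs the packet and denies it
    ipfw add 100 deny log ip from 1.2.3.4 to any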
The less we know
We're often not that eager to really know the tools we've got used to. less is a good example of that. Did you know it can:
- show/hide line numbers: -N
- select the position you want to see using a percentage; say, let's jump to ⅓ (33 %) of the log: 33p
- chop long lines (say "goodbye!" to MySQL's \G when the output is wide and you used pager less): -S
- squeeze multiple blank lines so you have fewer pages to deal with: -s
- display only (non-)matching lines (say "bye-bye!" to fgrep foo bar | less): &
- pipe the whole file or a part of it through a filter you've chosen: |
- run an editor for the file in question: v
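A quick sketch of a couple of these in action (the file name is made up):

    # open a log with line numbers, chopped lines, and jump straight to 33 %
    less -N -S +33p /var/log/syslog
    # once inside, type &error and press Enter to see only lines matching "error"
    # inside the mysql client, "pager less -S" gives the same chopped-line view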
Do yourself a favor and just press h when in less; I bet you won't regret it now that you know less. ;-)