traffic shaping – bandwidth limit (lyb, popeye, please respond)
- This topic has 8 replies, 3 voices, and was last updated 17 years, 9 months ago by LYb.
5 June 2006 at 9:14 am #7449 MisterNo (Participant)
There was talk here recently about controlling bandwidth from a machine that shares its internet connection with the other hosts on the network. Squid is of course a great thing and I have used it, but now I want to do this without a proxy. I tried it with shorewall, which can use a "tc start" script that relies on the tc command, and it also has some internal variant of its own built in. When I configured shorewall to limit, say, total throughput from my LAN card toward a host to 128kbit, it gives normal throughput at startup, but at the first file copy it starts limiting. While copying, the progress bar in mc jumps to 100% and then just sits there until the copy actually finishes. Put simply, it behaves very erratically.
Anyway, I would like to do this via the cbq script, so I'm asking whether anyone has done it and has experience with it. I tried something, but the cbq.init script itself didn't work properly, and on top of that it kept printing warnings about a variable called maxdepth or something like that. And I also need to limit only the internet bandwidth, while the LAN keeps working normally.
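For reference, cbq.init is driven by small per-class config files rather than command-line options; a minimal hedged sketch of limiting one LAN host to 128 kbit might look like this (the file name, interface, and address are assumptions; check the variable names against the cbq.init version you actually have):

```shell
# Hypothetical /etc/sysconfig/cbq/cbq-1280.lan-limit (untested sketch)
DEVICE=eth0,10Mbit,1Mbit   # interface, its raw bandwidth, weight (~1/10 of bandwidth)
RATE=128Kbit               # ceiling for this class
WEIGHT=12Kbit              # ~1/10 of RATE
PRIO=5
RULE=192.168.0.10          # traffic toward this host falls into the class
```

After creating the file, `cbq.init start` compiles all such files into tc commands, so LAN-only traffic that never matches a RULE stays unshaped.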
5 June 2006 at 4:18 pm #44881 popeye (Keymaster)

If I understood you correctly, you want to limit throughput only toward external addresses, not across the board. Have you read, say, http://www.szabilinux.hu/bandwidth/ ? What exactly is giving you trouble?
CBQ still works with packets marked by port and address. For real per-protocol filtering, the kind that couldn't be circumvented by switching ports, you would need the kernel and iptables patches I mentioned earlier. I haven't used them and have no practical experience, so it would be nice if you gave that a try. 😀
13 June 2006 at 3:12 am #44882 LYb (Participant)

A while back I dug up this script and have modified it quite a bit since, so it's not the same as the original, but it can give you enough to start with… Still, you should study how packet marking works and how the various "queuing disciplines" behave and what they do… here is what I use on ADSL 256/64:
————————————————————————————————-
#!/bin/bash
#
# myshaper – DSL/Cable modem outbound traffic shaper and prioritizer.
# Based on the ADSL/Cable wondershaper (http://www.lartc.org)
#
# Written by Dan Singletary (8/7/02)
#
# NOTE!! – This script assumes your kernel has been patched with the
# appropriate HTB queue and IMQ patches available here:
# (subnote: future kernels may not require patching)
#
# http://luxik.cdi.cz/~devik/qos/htb/
# http://luxik.cdi.cz/~patrick/imq/
#
# Configuration options for myshaper:
# DEV – set to ethX that connects to DSL/Cable Modem
# RATEUP – set this to slightly lower than your
# outbound bandwidth on the DSL/Cable Modem.
# I have a 1500/128 DSL line and setting
# RATEUP=90 works well for my 128kbps upstream.
# However, your mileage may vary.
# RATEDN – set this to slightly lower than your
# inbound bandwidth on the DSL/Cable Modem.
#
#
# Theory on using imq to "shape" inbound traffic:
#
# It’s impossible to directly limit the rate of data that will
# be sent to you by other hosts on the internet. In order to shape
# the inbound traffic rate, we have to rely on the congestion avoidance
# algorithms in TCP. Because of this, WE CAN ONLY ATTEMPT TO SHAPE
# INBOUND TRAFFIC ON TCP CONNECTIONS. This means that any traffic that
# is not tcp should be placed in the high-prio class, since dropping
# a non-tcp packet will most likely result in a retransmit which will
# do nothing but unnecessarily consume bandwidth.
# We attempt to shape inbound TCP traffic by dropping tcp packets
# when they overflow the HTB queue which will only pass them on at
# a certain rate (RATEDN) which is slightly lower than the actual
# capability of the inbound device. By dropping TCP packets that
# are over-rate, we are simulating the same packets getting dropped
# due to a queue-overflow on our ISP’s side. The advantage of this
# is that our ISP's queue will never fill because TCP will slow its
# transmission rate in response to the dropped packets, on the assumption
# that it has filled the ISP's queue, when in reality it has not.
# The advantage of using a priority-based queuing discipline is
# that we can specifically choose NOT to drop certain types of packets
# that we place in the higher priority buckets (ssh, telnet, etc). This
# is because packets will always be dequeued from the lowest priority class
# with the stipulation that packets will still be dequeued from every
# class fairly at a minimum rate (in this script, each bucket will deliver
# at least its fair share of 1/7 of the bandwidth).
#
# Reiterating main points:
# * Dropping a tcp packet on a connection will lead to a slower rate
# of reception for that connection due to the congestion avoidance algorithm.
# * We gain nothing from dropping non-TCP packets. In fact, if they
# were important they would probably be retransmitted anyway, so we want to
# try to never drop these packets. This means that saturated TCP connections
# will not negatively affect protocols that lack TCP's built-in retransmit.
# * Slowing down incoming TCP connections such that the total inbound rate is less
# than the true capability of the device (ADSL/Cable Modem) SHOULD result in little
# to no packets being queued on the ISP’s side (DSLAM, cable concentrator, etc). Since
# these ISP queues have been observed to queue 4 seconds of data at 1500Kbps or 6 megabits
# of data, having no packets queued there will mean lower latency.
#
# Caveats (questions posed before testing):
#   * Will limiting inbound traffic in this fashion result in poor bulk TCP performance?
#       - Preliminary answer is no! Seems that by prioritizing ACK packets (small
#         packets) throughput stays high.
#
# NOTE: the DEV/RATEUP/RATEDN settings were missing from the posted copy of the
# script; the values below are reconstructed from the comments above – adjust
# them to your own link.
DEV=eth0    # ethX that connects to the DSL/Cable modem
RATEUP=90   # kbit, slightly below the upstream rate
RATEDN=700  # kbit, slightly below the downstream rate

if [ "$1" = "status" ]
then
        echo "[qdisc]"
        tc -s qdisc show dev $DEV
        echo "[class]"
        tc -s class show dev $DEV
        echo "[iptables]"
        iptables -t mangle -L MYSHAPER-OUT -v -x 2> /dev/null
        iptables -t mangle -L MYSHAPER-IN -v -x 2> /dev/null
        exit
fi

if [ "$1" = "qdisc" ]
then
        echo "[qdisc]"
        tc -s qdisc show dev $DEV
        exit
fi

if [ "$1" = "class" ]
then
        echo "[class]"
        tc -s class show dev $DEV
        exit
fi

if [ "$1" = "iptables" ]
then
        echo "[iptables]"
        iptables -t mangle -L MYSHAPER-OUT -v -x 2> /dev/null
        iptables -t mangle -L MYSHAPER-IN -v -x 2> /dev/null
        exit
fi

# Reset everything to a known state (cleared)
tc qdisc del dev $DEV root 2> /dev/null > /dev/null
tc qdisc del dev $DEV ingress 2> /dev/null > /dev/null
iptables -t mangle -D POSTROUTING -o $DEV -j MYSHAPER-OUT 2> /dev/null > /dev/null
iptables -t mangle -F MYSHAPER-OUT 2> /dev/null > /dev/null
iptables -t mangle -X MYSHAPER-OUT 2> /dev/null > /dev/null
iptables -t mangle -D PREROUTING -i $DEV -j MYSHAPER-IN 2> /dev/null > /dev/null
iptables -t mangle -F MYSHAPER-IN 2> /dev/null > /dev/null
iptables -t mangle -X MYSHAPER-IN 2> /dev/null > /dev/null
ip link set imq0 down 2> /dev/null > /dev/null
rmmod imq 2> /dev/null > /dev/null

if [ "$1" = "stop" ]
then
        echo "Shaping removed on $DEV."
        exit
fi

###########################################################
#
# Outbound Shaping (limits total bandwidth to RATEUP)

# set queue size to give latency of about 2 seconds on low-prio packets
ip link set dev $DEV qlen 30

# changes mtu on the outbound device. Lowering the mtu will result
# in lower latency but will also cause slightly lower throughput due
# to IP and TCP protocol overhead.
ip link set dev $DEV mtu 1500

# Find device MTU
DEVMTU=`ifconfig $DEV | grep MTU | cut -d ':' -f 2- | sed 's/ .*//'`

# add HTB root qdisc
tc qdisc add dev $DEV root handle 1: htb default 26

# add main rate limit classes
tc class add dev $DEV parent 1: classid 1:1 htb rate ${RATEUP}kbit quantum $[$DEVMTU*9]

# add leaf classes - We grant each class at LEAST its "fair share" of bandwidth.
#                    This way no class will ever be starved by another class. Each
#                    class is also permitted to consume all of the available bandwidth
#                    if no other classes are in use.
tc class add dev $DEV parent 1:1 classid 1:20 htb rate $[$RATEUP/7]kbit ceil ${RATEUP}kbit prio 0 quantum $[$DEVMTU*9]
tc class add dev $DEV parent 1:1 classid 1:21 htb rate $[$RATEUP/7]kbit ceil ${RATEUP}kbit prio 1 quantum $[$DEVMTU*6]
tc class add dev $DEV parent 1:1 classid 1:22 htb rate $[$RATEUP/7]kbit ceil ${RATEUP}kbit prio 2 quantum $[$DEVMTU*4]
tc class add dev $DEV parent 1:1 classid 1:23 htb rate $[$RATEUP/7]kbit ceil ${RATEUP}kbit prio 3 quantum $[$DEVMTU*3]
tc class add dev $DEV parent 1:1 classid 1:24 htb rate $[$RATEUP/7]kbit ceil ${RATEUP}kbit prio 4 quantum $[$DEVMTU*2]
tc class add dev $DEV parent 1:1 classid 1:25 htb rate $[$RATEUP/7]kbit ceil ${RATEUP}kbit prio 5 quantum $[$DEVMTU*1]
tc class add dev $DEV parent 1:1 classid 1:26 htb rate $[$RATEUP/7]kbit ceil ${RATEUP}kbit prio 6 quantum $[$DEVMTU*1]

# attach qdisc to leaf classes - here we attach SFQ to each priority class. SFQ ensures that
#                                within each class connections will be treated (almost) fairly.
tc qdisc add dev $DEV parent 1:20 handle 20: sfq perturb 10
tc qdisc add dev $DEV parent 1:21 handle 21: sfq perturb 10
tc qdisc add dev $DEV parent 1:22 handle 22: sfq perturb 10
tc qdisc add dev $DEV parent 1:23 handle 23: sfq perturb 10
tc qdisc add dev $DEV parent 1:24 handle 24: sfq perturb 10
tc qdisc add dev $DEV parent 1:25 handle 25: sfq perturb 10
tc qdisc add dev $DEV parent 1:26 handle 26: sfq perturb 10

# filter traffic into classes by fwmark - here we direct traffic into priority class according to
#                                         the fwmark set on the packet (we set fwmark with iptables
#                                         later). Note that above we've set the default priority
#                                         class to 1:26 so unmarked packets (or packets marked with
#                                         unfamiliar IDs) will be defaulted to the lowest priority
#                                         class.
tc filter add dev $DEV parent 1:0 prio 0 protocol ip handle 20 fw flowid 1:20
tc filter add dev $DEV parent 1:0 prio 0 protocol ip handle 21 fw flowid 1:21
tc filter add dev $DEV parent 1:0 prio 0 protocol ip handle 22 fw flowid 1:22
tc filter add dev $DEV parent 1:0 prio 0 protocol ip handle 23 fw flowid 1:23
tc filter add dev $DEV parent 1:0 prio 0 protocol ip handle 24 fw flowid 1:24
tc filter add dev $DEV parent 1:0 prio 0 protocol ip handle 25 fw flowid 1:25
tc filter add dev $DEV parent 1:0 prio 0 protocol ip handle 26 fw flowid 1:26

# add MYSHAPER-OUT chain to the mangle table in iptables - this sets up the table we'll use
# to filter and mark packets.
iptables -t mangle -N MYSHAPER-OUT
iptables -t mangle -I POSTROUTING -o $DEV -j MYSHAPER-OUT

# add fwmark entries to classify different types of traffic - set fwmark from 20-26 according to
# desired class. 20 is highest prio.
iptables -t mangle -A MYSHAPER-OUT -p tcp --sport 0:1024 -j MARK --set-mark 23 # Default for low port traffic
iptables -t mangle -A MYSHAPER-OUT -p tcp --dport 0:1024 -j MARK --set-mark 23 # ""
iptables -t mangle -A MYSHAPER-OUT -p tcp --dport 20 -j MARK --set-mark 26     # ftp-data port, low prio
iptables -t mangle -A MYSHAPER-OUT -p tcp --dport 5190 -j MARK --set-mark 23   # aol instant messenger
iptables -t mangle -A MYSHAPER-OUT -p icmp -j MARK --set-mark 20               # ICMP (ping) - high prio, impress friends
iptables -t mangle -A MYSHAPER-OUT -p udp -j MARK --set-mark 21                # DNS name resolution (small packets)
iptables -t mangle -A MYSHAPER-OUT -p tcp --sport ssh -j MARK --set-mark 22    # secure shell
iptables -t mangle -A MYSHAPER-OUT -p tcp --dport ssh -j MARK --set-mark 22    # secure shell
iptables -t mangle -A MYSHAPER-OUT -p tcp --dport telnet -j MARK --set-mark 22 # telnet (ew...)
iptables -t mangle -A MYSHAPER-OUT -p tcp --sport telnet -j MARK --set-mark 22 # telnet (ew...)
iptables -t mangle -A MYSHAPER-OUT -p tcp --sport http -j MARK --set-mark 25   # Local web server
#iptables -t mangle -A MYSHAPER-OUT -p tcp --dport 80 -j MARK --set-mark 24    # web
#iptables -t mangle -A MYSHAPER-OUT -p tcp --dport 443 -j MARK --set-mark 24   # https
iptables -t mangle -A MYSHAPER-OUT -p tcp -m length --length :64 -j MARK --set-mark 21 # small packets (probably just ACKs)
iptables -t mangle -A MYSHAPER-OUT -m mark --mark 0 -j MARK --set-mark 26      # redundant - mark any unmarked packets as 26 (low prio)

# Done with outbound shaping
#
####################################################

echo "Outbound shaping added to $DEV. Rate: ${RATEUP}Kbit/sec."

# comment out the following line if you also want downstream shaping.
exit

####################################################
#
# Inbound Shaping (limits total bandwidth to RATEDN)

# attach an ingress qdisc so incoming traffic can be policed
tc qdisc add dev $DEV handle ffff: ingress

# filter *everything* to it (0.0.0.0/0), drop everything that's
# coming in too fast:
tc filter add dev $DEV parent ffff: protocol ip prio 50 u32 match ip src \
    0.0.0.0/0 police rate ${RATEDN}kbit burst 10k drop flowid :1

echo "Inbound shaping added to $DEV. Rate: ${RATEDN}Kbit/sec."
-------------------------------------------------------------------------------------------

I didn't shape the downlink since I don't need it, and besides, the link's throughput would have to be crippled quite badly for that to work.
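To make the numbers in the script concrete: each of the seven leaf classes is guaranteed 1/7 of RATEUP, and the quantum is a multiple of the device MTU. A quick shell check using the header's example figures (RATEUP=90, MTU 1500; those are the script author's values, not this 256/64 line's):

```shell
#!/bin/sh
# Reproduce the per-class arithmetic the script does with $[ ] expansion.
RATEUP=90     # kbit, outbound ceiling from the script's example
DEVMTU=1500   # typical Ethernet MTU

# Guaranteed rate per leaf class (integer division, same as in the script):
echo "guaranteed per class: $((RATEUP / 7)) kbit"

# Quantum for the highest-priority class (9 MTUs):
echo "prio-0 quantum: $((DEVMTU * 9)) bytes"
```

So with these values each class is guaranteed 12 kbit but may borrow up to the full 90 kbit ceiling when the other classes are idle.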
13 June 2006 at 8:13 am #44883 MisterNo (Participant)

Thanks LYb. I'll try it out in the next few days.
13 June 2006 at 12:48 pm #44884 popeye (Keymaster)

I think it's faster and more readable to use tcng for the setup. Here is
something similar to the script above, but it picks out ping and DNS traffic precisely, instead of letting ICMP and UDP through wholesale.

[code]dev "eth0" {
    egress {
        class (<$veza>)
            if icmp_type == 0 || icmp_type == 8
            ;
        class (<$rad>)
            if tcp_sport == PORT_IMAP || tcp_sport == PORT_IMAPS
            if tcp_sport == PORT_SSH && ip_tos_delay == 1
            if ip_len < 256 && tcp_dport == PORT_SSH
            if tcp_dport == PORT_DOMAIN || udp_dport == PORT_DOMAIN
            if ip_len < 512 && tcp_dport == PORT_HTTP
            ;
        class (<$ostalo>) if 1 ;

        htb () {
            class ( rate 240kbps, ceil 240kbps ) {
                $veza = class ( rate 64kbps, ceil 240kbps ) { sfq; } ;
                $rad = class ( rate 160kbps, ceil 240kbps ) { sfq; } ;
                $ostalo = class ( rate 16kbps, ceil 240kbps ) { sfq; } ;
            }
        }
    }
}[/code]

Written from memory, so there may be mistakes and copy/paste may not work, but I'm sure you'll manage. :) This is an egress example (here, the traffic arriving toward the LAN); for the other direction you use ingress, it's fairly simple.
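For anyone trying the tcng route: the config above isn't consumed by tc directly; the tcng package ships a compiler, tcc, that turns it into plain tc commands. A hedged usage sketch (the file name is assumed):

```shell
# Save the [code] block above as shaper.tcc, then:
tcc shaper.tcc        # print the generated tc command sequence for review
tcc shaper.tcc | sh   # or apply it directly (needs root)
```

Reviewing the generated commands first is a good way to see how tcng maps classes and selectors onto HTB classes and u32/fw filters.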
13 June 2006 at 9:39 pm #44885 LYb (Participant)

Yes, though the script above could also match DNS by port like that. As it happens, just today the script got tweaked for torrent TCP and UDP traffic, so that it goes into the lowest-priority class.
13 June 2006 at 9:44 pm #44886 popeye (Keymaster)

Of course, in both cases HTB is the QoS algorithm; tcng is just simpler and more readable (easier to maintain).
But, as I said, this is more of a workaround, since it isn't real L7 filtering. This might be the right moment for someone like MisterNo to try out the kernel and netfilter patches that make that possible, and share his impressions with the rest of us. 😉
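For the record, the kernel/netfilter patches referred to here are presumably the l7-filter project; with them, marking by application protocol (so that switching ports doesn't help) looks roughly like this. This is an untested sketch: it assumes a patched kernel plus iptables and protocol patterns installed under /etc/l7-protocols.

```shell
# Hypothetical rules for the MYSHAPER-OUT chain from the script above:
iptables -t mangle -A MYSHAPER-OUT -m layer7 --l7proto bittorrent -j MARK --set-mark 26  # p2p to lowest prio
iptables -t mangle -A MYSHAPER-OUT -m layer7 --l7proto ssh -j MARK --set-mark 22         # ssh stays interactive
```

The fwmarks then feed the same `tc filter ... fw flowid` rules the script already sets up, so nothing else has to change.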
14 June 2006 at 12:53 pm #44887 MisterNo (Participant)

Since I'm currently buried in projects and the deadlines are tight, yesterday I only managed to start the script LYb posted. It started without a single error, but that's all I had time to try.
Anyway, I solemnly promise that once I'm out of this crunch I'll be the guinea pig and take a more serious look at shaping. Then you'll get "Linux traffic shaping by MisterNo" exclusively on linuxo.org.
15 June 2006 at 12:23 am #44888 LYb (Participant)

Oh for crying out loud… I wrote a reply, got carried away, and the forum told me my session had expired…
BTW, the script has been running here for more than half a year, and the latency is incomparably better than without it. For example, ping with a full upload:
— http://www.sezampro.yu ping statistics —
10 packets transmitted, 10 received, 0% packet loss, time 8999ms
rtt min/avg/max/mdev = 49.658/135.482/244.936/61.908 ms

and with full DL and UL:

10 packets transmitted, 10 received, 0% packet loss, time 8999ms
rtt min/avg/max/mdev = 1351.801/1535.439/1724.214/98.980 ms, pipe 2

That's much better than without it, when the average can go above 8000. It could be pushed further, but the link would have to be crippled even more; you need to find a balance. On top of that, it means p2p doesn't hog the link for no reason and doesn't stall telnet/ssh or any other interactive connection.
Of course, I'm hoping for improvements (looking expectantly at MisterNo 🙂 )