routing - Getting Squid and TPROXY with IPv6 working on CentOS 7

I'm having trouble getting TPROXY working with Squid and IPv6 on a CentOS 7 server. I was previously using a generic intercept setup with NAT, but it was limited to IPv4 only. I'm now expanding the setup to include IPv6 with TPROXY.



I've been using the official Squid wiki article on the subject to configure everything:

http://wiki.squid-cache.org/Features/Tproxy4




Thus far the TPROXY config appears to be working for IPv4 with no issues. With IPv6, however, connections are timing out and not working properly. I'll break down the setup for better understanding.



Note: all firewall and routing rules are exactly the same as for IPv4; the only difference in the examples below is the use of inet6 and ip6tables for the IPv6-based rules.




  • OS and kernel: CentOS 7 (3.10.0-229.14.1.el7.x86_64)
  • All packages are up to date according to yum
  • Squid version: 3.3.8 (also tried 3.5.9)
  • Firewall: iptables/ip6tables 1.4.21
  • libcap-2.22-8.el7.x86_64



IPv6 connectivity is currently through a 6in4 tunnel via Hurricane Electric. This is configured on the DD-WRT router, and the assigned prefix is delegated to clients via radvd. The Squid box has several static IPv6 addresses configured.
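
For completeness, basic IPv6 connectivity from the Squid box itself (outside of Squid) can be confirmed with the usual checks; eno1 is my LAN interface and the hostname below is just an example target:

ip -6 addr show dev eno1 scope global    # the static IPv6 addresses on the box
ping6 -c 3 ipv6.google.com               # direct IPv6 out through the 6in4 tunnel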



The Squid box sits within the main LAN which it is serving. Clients that have their port 80 traffic intercepted (mainly wireless clients) are pushed to the Squid box by my DD-WRT router with firewall and routing rules adapted from the Policy Routing wiki article and the DD-WRT wiki.
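
As a rough illustration only (my exact router rules aren't reproduced here), the Policy Routing pattern on the DD-WRT side marks LAN port 80 traffic and routes it to the proxy; br0, the mark value, the table number and $PROXY_IPV6 below are placeholders rather than my actual values:

# illustrative only -- mark LAN port 80 traffic and route it to the Squid box
ip6tables -t mangle -A PREROUTING -i br0 -p tcp --dport 80 -j MARK --set-mark 3
ip -6 rule add fwmark 3 table 2
ip -6 route add default via "$PROXY_IPV6" dev br0 table 2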





This appears to be working OK in terms of passing the traffic to the Squid box. One additional rule I had to add on the DD-WRT router was an exception for the configured outgoing IPv4 and IPv6 addresses of the Squid box; otherwise I got a crazy loop and traffic broke for all clients, including the main LAN clients that use Squid on 3128.



ip6tables -t mangle -I PREROUTING -p tcp --dport 80 -s "$OUTGOING_PROXY_IPV6" -j ACCEPT
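
The IPv4 side presumably needs the matching iptables exception (shown here as the assumed counterpart, with $OUTGOING_PROXY_IPV4 as the analogous placeholder):

iptables -t mangle -I PREROUTING -p tcp --dport 80 -s "$OUTGOING_PROXY_IPV4" -j ACCEPT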


On the Squid box I am then using the following routing rules and the DIVERT chain to handle the traffic accordingly. I needed to add additional flush/delete rules to prevent errors about the chain already existing during testing. My firewall is CSF; I have added the following to csfpre.sh:



# policy routing: deliver packets marked 1 locally via table 100
# (flush/delete first so re-running the script doesn't error out)
ip -f inet6 route flush table 100
ip -f inet6 rule del fwmark 1 lookup 100
ip -f inet6 rule add fwmark 1 lookup 100
ip -f inet6 route add local default dev eno1 table 100

# DIVERT chain: mark packets belonging to existing sockets, TPROXY new port 80 connections
ip6tables -t mangle -F
ip6tables -t mangle -X
ip6tables -t mangle -N DIVERT
ip6tables -t mangle -A DIVERT -j MARK --set-mark 1
ip6tables -t mangle -A DIVERT -j ACCEPT
ip6tables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
ip6tables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY --tproxy-mark 0x1/0x1 --on-port 3129
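
For reference, the resulting rule, table and mangle chain state can be confirmed with standard commands:

ip -6 rule show                           # should list: fwmark 0x1 lookup 100
ip -6 route show table 100                # should list: local default dev eno1
ip6tables -t mangle -L PREROUTING -n -v   # packet counters on the socket/TPROXY rules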



squid.conf is configured for two ports:

http_port 3128
http_port 3129 tproxy
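
As a sanity check that Squid accepted this config and is listening on both ports over IPv4 and IPv6 (generic commands, nothing specific to this setup):

squid -k parse                        # validate squid.conf syntax
ss -lntp | grep -E ':(3128|3129)'     # confirm listeners on both ports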


In addition, I am also using Privoxy and had to add no-tproxy to my cache_peer line; otherwise no traffic could be forwarded for either protocol.




cache_peer localhost parent 8118 7 no-tproxy no-query no-digest


I am not using any tcp_outgoing_address directives because of Privoxy; instead I am controlling the outbound addresses through CentOS and the bind order.



sysctl values:

net.ipv4.ip_forward = 1
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.eno1.rp_filter = 0
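
There is no rp_filter sysctl for IPv6 (IPv6 reverse-path filtering is done with the ip6tables rpfilter match instead); the only comparable IPv6 sysctl I can think to check is forwarding, which may or may not be relevant here:

sysctl net.ipv6.conf.all.forwarding
sysctl net.ipv6.conf.eno1.forwarding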


I am not sure whether the rp_filter modifications are needed, as the setup works on IPv4 with or without them and produces the same result for IPv6.



SELinux



SELinux is enabled on the Squid box, but policies have been configured to allow the TPROXY setup, so it's not being blocked (the fact that IPv4 works shows this anyway). I have checked with grep squid /var/log/audit/audit.log | audit2allow -a and get the following matches:



#============= squid_t ==============

#!!!! This avc is allowed in the current policy
allow squid_t self:capability net_admin;

#!!!! This avc is allowed in the current policy
allow squid_t self:capability2 block_suspend;

#!!!! This avc is allowed in the current policy
allow squid_t unreserved_port_t:tcp_socket name_connect;



I have also set the following booleans:



setsebool squid_connect_any 1
setsebool squid_use_tproxy 1
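
The current boolean state can be confirmed with getsebool:

getsebool -a | grep squid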


Broken IPv6 connectivity



Ultimately, IPv6 connectivity is completely broken for TPROXY clients (LAN clients on port 3128 which use a WPAD/PAC file have fully working IPv6). While it appears the traffic is being routed to the Squid box in some way, no IPv6 requests via TPROXY appear in the access.log. All IPv6 requests, whether to literal IPv6 addresses or to hostnames resolved via DNS, time out. I can access internal IPv6 clients, but again, this traffic isn't logged either.
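
If it helps narrow things down, a capture on the Squid box should show whether the intercepted IPv6 traffic actually arrives and whether Squid ever opens outbound IPv6 connections (eno1 as above; the filter is generic):

# watch intercepted client traffic and Squid's outbound origin-server traffic
tcpdump -ni eno1 'ip6 and tcp port 80'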




I did some testing using test-ipv6.com and found that it detected my outgoing Squid IPv6 address, but the IPv6 tests either showed as bad/slow or timed out. I temporarily enabled the Via header and found the Squid HTTP header was visible, so the traffic is at least reaching the Squid box but not being routed properly once it's there.



I've been trying to get this working for some time and cannot find what the problem is; I've even asked on the Squid mailing list, but have been unable to diagnose or solve the actual issue. Based on my testing, I'm fairly sure the problem is on the Squid box, in one of the following areas:




  • Routing

  • Kernel

  • Firewall




Any ideas and additional steps that I can take to get TPROXY and IPv6 working would be greatly appreciated!



Additional information



ip6tables rules:



Chain PREROUTING (policy ACCEPT)
target     prot opt source        destination
DIVERT     tcp      ::/0          ::/0          socket
TPROXY     tcp      ::/0          ::/0          tcp dpt:80 TPROXY redirect :::3129 mark 0x1/0x1

Chain INPUT (policy ACCEPT)
target     prot opt source        destination

Chain FORWARD (policy ACCEPT)
target     prot opt source        destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source        destination

Chain POSTROUTING (policy ACCEPT)
target     prot opt source        destination

Chain DIVERT (1 references)
target     prot opt source        destination
MARK       all      ::/0          ::/0          MARK set 0x1
ACCEPT     all      ::/0          ::/0



IPv6 routing table (prefix obscured):



unreachable ::/96 dev lo metric 1024 error -101
unreachable ::ffff:0.0.0.0/96 dev lo metric 1024 error -101
2001:470:xxxx:xxx::5 dev eno1 metric 0
    cache mtu 1480
2001:470:xxxx:xxx:b451:9577:fb7d:6f2d dev eno1 metric 0
    cache
2001:470:xxxx:xxx::/64 dev eno1 proto kernel metric 256
unreachable 2002:a00::/24 dev lo metric 1024 error -101
unreachable 2002:7f00::/24 dev lo metric 1024 error -101
unreachable 2002:a9fe::/32 dev lo metric 1024 error -101
unreachable 2002:ac10::/28 dev lo metric 1024 error -101
unreachable 2002:c0a8::/32 dev lo metric 1024 error -101
unreachable 2002:e000::/19 dev lo metric 1024 error -101
unreachable 3ffe:ffff::/32 dev lo metric 1024 error -101
fe80::/64 dev eno1 proto kernel metric 256
default via 2001:470:xxxx:xxxx::1 dev eno1 metric 1
