
Posts

Showing posts from August, 2014

linux - Issue getting httpd fullstatus to work on an Apache 2.4 webserver

I have a webserver, and I am migrating it over to RHEL 7 (httpd 2.2 -> 2.4). On our old server (2.2), we could run httpd fullstatus and it would show a lot of good information about Apache on the server. On the Apache 2.4 server, I am having trouble getting the fullstatus command to work. Here is the list of steps that I took (compiled from various guides around the internet): 1.) Checked that status_module is enabled: httpd -M | grep status returns status_module (shared). 2.) Added the handler into httpd.conf:

    <Location /server-status>
        SetHandler server-status
        Require host
        Order deny,allow
        Deny from all
        Allow from 127.0.0.1
    </Location>

3.) Created a conf file inside conf.modules.d/ with the same directives (minus the Require line). I reloaded / restarted Apache after doing all of those things, but I still had no luck getting httpd fullstatus to work. NOTE - I did both of these ste...
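For comparison, a minimal 2.4-style status block (a sketch, not from the post; the file name and the use of Require local are assumptions). Apache 2.4 replaces Order/Deny/Allow with the Require directives, and apachectl fullstatus additionally needs a text browser such as links or elinks installed on the box:

    # /etc/httpd/conf.d/server-status.conf (hypothetical file name)
    <Location "/server-status">
        SetHandler server-status
        Require local    # 2.4 replacement for Order/Deny/Allow
    </Location>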

iis 7 - Intranet/Internet DNS name conflict on windows server

I have an intranet network on which a Windows IIS7 server is named mycompagny.com. We also have a website hosted elsewhere, also named mycompagny.com. From the outside of the network, everything is fine for our current use case. But from the inside, it's not possible to access the website in a browser without the www prefix; it returns a dummy page from IIS7. A hack used by a colleague is to skip local DNS routing by using Google's DNS service. A major con of that solution is that it has to be configured locally on all machines and that it disables local HTTP serving. Another hack would be to always use www, but we have some subdomains that are not configured to work with it, for example our famous nice-app.mycompagny.com. I can't just change the intranet server name because it's already used for other purposes, such as SSH access to a bunch of machines and FTP serving. Renaming mycompagny.com to mycompagny-intranet.com or something else would certainly break a lot of things an...
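One commonly suggested direction (my sketch, not from the post; the IP is a placeholder) is to add a record for the bare name in the internal Windows DNS zone that points at the external web host, so inside clients resolve it to the outside site:

    :: on the internal DNS server; 203.0.113.10 stands in for the external web host's IP
    dnscmd /RecordAdd mycompagny.com @ A 203.0.113.10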

linux - nginx redirect http to https not working

I am trying to redirect from HTTP to HTTPS. For example, you visit example.com and you will be automatically redirected to https://example.com. I tried this one: server { listen 80; return 301 https://$host$request_uri; } as well as this one: server { listen 80; server_name example.com; return 301 https://$server_name$request_uri; } as found here: In Nginx, how can I rewrite all http requests to https while maintaining sub-domain? But neither of them seems to be working for me; I am staying on example.com. Anyone got an idea?
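For reference, a minimal sketch of the usual two-block layout (server names and certificate paths are placeholders, not from the post). One common gotcha is another listen 80 block marked default_server catching the request before the redirect block ever sees it:

    server {
        listen 80 default_server;
        server_name example.com;
        return 301 https://$host$request_uri;   # redirect all plain-HTTP traffic
    }

    server {
        listen 443 ssl;
        server_name example.com;
        ssl_certificate     /etc/nginx/ssl/example.com.crt;   # placeholder paths
        ssl_certificate_key /etc/nginx/ssl/example.com.key;
    }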

apache 2.2 - RAM usage too high in httpd processes

I manage two dedicated servers. I use CentOS 6 with Plesk Panel. Update: I use Apache with mod_php. On the first server I have a site in Wordpress. I have noticed that my httpd processes take up more memory each time. Besides the 'top' command, I use the following command to find out: ps -ylC httpd --sort=rss (image) Pay attention to the RSS column: the httpd processes occupy from 13MB to 127MB of RAM. The installed Apache modules are: # httpd -l Compiled in modules: core.c prefork.c http_core.c mod_so.c On the second server I have a PHPBB website. In this case the processes involved are also httpd; all occupy about 85MB. (image) I've read that they should occupy about 20MB. How can I profile or optimize this? With what tool? I tried XHProf, but it reports less memory used than is really in use. Memory usage is intense on my servers. This is a big problem. Update: Server 1 meminfo output: # cat /proc/meminfo MemTotal: 5969120 kB MemFree: 625720 k...
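A quick sanity check (my sketch, assuming the prefork MPM) is to compute the average httpd resident size and size MaxClients so that the whole worker pool fits in RAM:

    # average resident size of httpd workers, in MB (RSS is column 8 of ps -yl output)
    ps -ylC httpd --sort=rss | awk 'NR>1 {sum += $8; n++} END {print sum/n/1024 " MB"}'
    # rule of thumb: MaxClients ~= (RAM available for Apache) / (average worker RSS)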

proxy - Redirect scp or ftp traffic by hostname

I have 3 servers (server1, server2 and server3) with Apache and SCP (and/or an FTP server). These servers are behind a router. I have another server with a public IP used as a proxy for Apache. So, if I'm trying to access website1 (hosted on server1), my requests are proxied to server1. The same for website2 and website3... Now, I would like to do the same thing for FTP or SCP. Is this possible? E.g.: if I install an FTP server on server1, server2 and server3, can I proxy my requests based on the hostname? ftp.website1 to the FTP server on server1, and so on... I can also use SCP; there is no difference, and I have full access to the proxy and the 3 servers.

Install openssl-dev on Ubuntu server

In order to compile nginx I need to install openssl and openssl-dev (I'm following a book guide). So I'm doing this: sudo apt-get install openssl openssl-dev But I get an error telling me that it's impossible to find openssl-dev. Also, after some googling, it seems that libssl-dev is equivalent to openssl-dev; is that true? (apt-get found libssl-dev on my server.) Here is my server's kernel version: 2.6.32-22-server Any help welcome! Answer In all likelihood, the dependencies for the version of the package in your release of Ubuntu (or other Debian-derived arrangements) are the same as the deps for the version you are trying to build, so you could run apt-get build-dep nginx or aptitude build-dep nginx; this will not install the nginx package but will instead install all those listed as its dependencies (and their dependencies, as usual), which includes libssl-dev (the package that you are currently looking for). In most cases this will allow the build of...
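A minimal sketch of that flow (the configure flag is illustrative, not from the answer; the source directory name depends on the version apt fetches):

    sudo apt-get build-dep nginx      # pulls libssl-dev among the other build deps
    apt-get source nginx              # fetch the source package (no root needed)
    cd nginx-*/
    ./configure --with-http_ssl_module
    make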

cisco - Having trouble identifying where to create subnets and apply VLSM

You can download the 200kb PDF file to see the questions I'm being asked. I'm not a networking guy, and our teacher has really failed to teach the basics and just thrust us into this thing. I have no choice but to brute-force my way through this filler class. How exactly can I decide where to subnet? For example, in my PDF, does every connection between a router and a switch have to be a subnet? Where do I apply VLSM? Does anyone have a step-by-step tutorial on creating a VLSM network between a couple of routers, switches and hosts? In the PDF they show 8000h; that means I need to accommodate 8000 hosts, right? I don't know how to do that. If the question says the IP address in the beginning is 191.20.0, how can I accommodate 8000 hosts? Edit: My main question is how to accommodate 8000h like in the PDF Answer I am only doing this because I believe you. The kids I am going to school with are lost, and we are using the Cisco NetAcad curriculum. The answer is divide...
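As a worked example of the host-bit arithmetic (mine, not from the answer): a subnet with n host bits holds 2^n - 2 usable addresses, so for 8000 hosts:

    2^12 - 2 = 4094   (too small)
    2^13 - 2 = 8190   (fits 8000 hosts)
    prefix = 32 - 13 = /19  ->  mask 255.255.224.0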

domain name system - Multiple data centers and HTTP traffic: DNS Round Robin is the ONLY way to assure instant fail-over?

Multiple A records pointing to the same domain seem to be used almost exclusively to implement DNS Round Robin as a cheap load-balancing technique. The usual warning against DNS RR is that it is not good for high availability: when one IP goes down, clients will continue to use it for minutes. A load balancer is often suggested as a better choice. Both claims are not completely true: When the traffic is HTTP, most browsers are able to automatically try the next A record if the previous one is down, without a new DNS lookup. Read here chapter 3.1 and here. When multiple data centers are involved, DNS RR is the only option to distribute traffic across them. So, is it true that, with multiple data centers and HTTP traffic, the use of DNS RR is the ONLY way to assure instant fail-over when one data center goes down? Thanks, Valentino Edit: Of course each data center has a local load balancer with hot spare. It's OK to sacrifice session affinity for an instant fail-ov...
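For reference, a round-robin setup is nothing more than multiple A records for the same name (the names and IPs below are placeholders, not from the post):

    ; zone file fragment, hypothetical addresses
    www.example.com.  300  IN  A  192.0.2.10      ; data center 1
    www.example.com.  300  IN  A  198.51.100.10   ; data center 2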

I am still running Ubuntu 13.04, how should I react to the Heartbleed Bug?

I know that 13.04 is affected (or at least my installation is) because of the OpenSSL version currently installed. However, after running sudo apt-get update && sudo apt-get upgrade, I checked my OpenSSL version and it was still an unpatched build. I also checked http://www.ubuntu.com/usn/usn-2165-1/ and 13.04 isn't listed. What can I do to patch OpenSSL on my machine? Answer Note that 13.04 is no longer supported; upgrading to a supported version is the recommended action. But if a short-term solution is needed, it's possible to rebuild the packages from source (sample instructions) with a patch applied, e.g.:

    sudo apt-get install build-essential fakeroot dpkg-dev devscripts
    apt-get source openssl
    sudo apt-get build-dep openssl
    cd openssl
    dch -i
    # ...apply patch...
    dpkg-buildpackage -rfakeroot -uc -b
    cd ..
    sudo dpkg -i *.deb

From the Ubuntu changelog page for openssl, find the diff file for quantal, which happens to have the same base version of openssl (...

linux - Can I make an ext3 filesystem recognize (and use) the entire partition?

Using gparted and partimage from SysRescCD, I recently
- made a backup image of the partition containing my Ubuntu installation
- deleted all partitions except for the original Windows partitions
- reduced the size of the Win7 partition
- created an extended partition using all unallocated space
- within the extended partition, created an ext3 partition and a swap partition
- restored the backup image to the ext3 partition
After these operations the ext3 partition is larger than when I started, but the filesystem is still reporting the old size:

    $ fdisk -l
    Disk /dev/sda: 640.1 GB, 640135028736 bytes
    255 heads, 63 sectors/track, 77825 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x1549f232
    Device Boot   Start   End     Blocks   Id  System
    /dev/sda1 *   1       13      102400   7   HPFS/NTFS
    Partition 1 does not end on cylinder boundary.
    /d...
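Not from the post, but the usual tool for exactly this situation is resize2fs, which grows an ext2/ext3 filesystem to fill its (already enlarged) partition; the device name below is a placeholder:

    # check the filesystem first, then grow it to the partition size
    e2fsck -f /dev/sda5    # hypothetical device
    resize2fs /dev/sda5    # with no size argument, fills the whole partition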

domain name system - Why is Denic not accepting my nameservers?

I'm currently in the process of moving all of our domains to our own nameservers, which wasn't an issue until I hit our own .de domain. I (think I) understand the implications of having the NS inside its own domain, hence the need for glue records. Until yesterday I would have said I had a pretty good understanding of BIND and DNS zones, but then I was presented with this error from the Denic nameserver predelegation check:

    Inconsistent set of nameserver IP addresses (NS, provided glues, determined glues) ns2.hartwig-at.de [88.198.242.190/88.198.242.190]
    Default resolver determined: [], other resolvers determined: {88.198.242.190/88.198.242.190=[/2a01:4f8:d13:3c85:0:0:0:2, /88.198.242.190]}
    Inconsistent set of nameserver IP addresses (NS, provided glues, determined glues) ns1.hartwig-at.de [cloud.hartwig-at.de/176.221.46.23]
    Default resolver determined: [], other resolvers determined: {cloud.hartwig-at.de/176.221.46.23=[/2a00:1158:3:0:0:0:0:...
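Reading the error, the check appears to compare the glue you hand to Denic against every A/AAAA record it can resolve for each NS name; ns2 resolves to both an IPv4 and an IPv6 address, but only the IPv4 glue was provided. A sketch of a consistent setup (my reading, reusing the post's addresses) is to have the zone and the registered glue list the same full set:

    ; hartwig-at.de zone, illustrative fragment
    ns2.hartwig-at.de.  IN  A     88.198.242.190
    ns2.hartwig-at.de.  IN  AAAA  2a01:4f8:d13:3c85::2
    ; ...and the glue registered with Denic must list both addresses as well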

windows - SCCM Client Image deployment options

At my organization, we are using SCCM to manage OS deployments. Right now, it's rather complicated to get an image out to a client, and I'm looking for an easier way. Here is what we have to do right now to get this to work: We first have to gather the computer name and MAC address so we can properly target the machine. All of the computers we want to reimage are added to a collection that has one required task. This task reboots the machine into a PXE environment, wipes the hard drive, and drops the OS onto the drive. The problem with this is that when you add a computer to the collection, the process doesn't run until the next scheduled policy check, which can take up to 30 minutes depending on when you add it to the collection. On top of that, there is no way to set a timer, so techs have to wait until off hours to drop any images. What's worse is that we don't have a way to manually kick off the imaging process. If we have a clean hard drive, we have a choice on how to pro...
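One thing that can shorten the 30-minute wait (my suggestion, not from the post) is forcing a machine-policy retrieval on the client instead of waiting for the schedule; the GUID below is the standard Machine Policy Retrieval & Evaluation trigger:

    # run on the client (PowerShell); triggers an immediate policy check
    Invoke-WmiMethod -Namespace root\ccm -Class SMS_Client `
        -Name TriggerSchedule -ArgumentList "{00000000-0000-0000-0000-000000000021}"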

linux - RAID setup for high speed reading

We are looking to build a realtime playback machine using Linux and a RAID (5 or 10) setup. The current setup looks like: 12GB memory, 5 x 7200rpm drives (software RAID), CentOS 6 (kernel 2.6.32-71.29.1.el6.x86_64), NVidia Quadro 5000 (driver 280.13), Intel(R) Xeon(R) CPU X5650 @ 2.67GHz. I ran Bonnie++ and iozone to benchmark different RAID setups (5 and 10), with different fs types (ext4 and xfs) and different stripe sizes. Unfortunately it seems that I can't get the speed I want out of it (always <200MB/s). The other test I made was directly in the playback software (RV - http://www.tweaksoftware.com/products/rv ), but I could not get it to play faster than 20 frames per second (looking for 24 fps) with more than 3 sequences. These playback details are a little beside the point; I just want to know what would be the best setup to get something like ~700MB/s read performance. Is it possible? I've been reading quite a bit; it seems like a hardware controller could be better...
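To measure whether an array can sustain that kind of streaming read, a hedged fio sketch (my example; the mount point and sizes are placeholders):

    # sequential read test: 4 parallel streams of 1M blocks, bypassing page cache
    fio --name=seqread --directory=/mnt/raid --rw=read \
        --bs=1M --size=4G --numjobs=4 --direct=1 --group_reporting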

domain name system - Reverse DNS not automatically updating on Windows DNS server

We use Windows Server 2003 for DNS on our network. The forward DNS entries ("A" records) for Windows machines on the domain are populated automatically. However, the reverse DNS entries ("PTR" records) are not. The reverse lookup zone exists, and I can add entries to it manually, but it doesn't automatically populate. Dynamic updates are enabled for both the forward and reverse zones. What am I doing wrong?
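A quick way to exercise dynamic registration from a domain client (my suggestion, not from the post; the IP is a placeholder) is to force a re-registration and then query the reverse zone:

    ipconfig /registerdns
    nslookup 192.168.1.50   # hypothetical client IP; should return its PTR name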

raidz - ZFS mirror or RAID-Z for expandability

I'm building out a FreeNAS based server that will primarily be used to serve up iSCSI disks to Xen virtual machines and dedicated servers. My current data need is for 8TB of space, growing at about 2TB per year. I have a Supermicro X6DHE-XB 3U enclosure with 4GB of RAM and 16 SATA hot-swap bays that I will be using. It comes with 2x8-port 3Ware RAID cards, but I'm planning on just using the ZFS capabilities instead of the hardware RAID. My initial drive set will be 8x2TB Hitachi Deskstar 7K3000 HDS723020BLA642 drives. I have 2 questions: I will want to add additional drives to this server in the future and have them added to the storage pool; ideally, I'd like to do this live without having to reboot or take the system offline. Are there any limitations or advantages to a ZFS mirror setup vs a RAID-Z setup for expanding the storage pool? With the hot-swap SATA ports, can I add a disk and have it show up in the FreeNAS GUI as available? If the performance penalty is less t...
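On the expansion question, a sketch of how the two layouts grow (my illustration; pool and device names are placeholders): a pool of mirrors is extended a mirrored pair at a time, while an existing raidz vdev cannot have disks added to it, so you add a whole new raidz vdev instead:

    # add a new mirrored pair of disks to an existing pool
    zpool add tank mirror da8 da9

    # with raidz you add an entire new raidz vdev instead
    zpool add tank raidz da8 da9 da10 da11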

Software vs hardware RAID performance and cache usage

I've been reading a lot on RAID controllers/setups, and one thing that comes up a lot is how hardware controllers without cache offer the same performance as software RAID. Is this really the case? I always thought that hardware RAID cards would offer better performance even without cache. I mean, you have dedicated hardware to perform the tasks. If that is the case, what is the benefit of getting a RAID card that has no cache, something like an LSI 9341-4i, which isn't exactly cheap? Also, if a performance gain is only possible with cache, is there a cache configuration that writes to disk right away but keeps data in cache for read operations, making a BBU not a priority? Answer In short: if using a low-end RAID card (without cache), do yourself a favor and switch to software RAID. If using a mid-to-high-end card (with BBU or NVRAM), then hardware is often (but not always! see below) a good choice. Long answer: when computing power was limited, hardware RAID cards...
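For reference, the software-RAID alternative the answer recommends is a one-liner with mdadm (my sketch; the device names are placeholders):

    # create a Linux software RAID-10 array from four disks
    mdadm --create /dev/md0 --level=10 --raid-devices=4 \
        /dev/sda /dev/sdb /dev/sdc /dev/sdd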

linux - Spectre/Meltdown - update microcode

I am trying to manually update the microcode for the Intel i5-2410M. Dell XPS 15z 2011 - Intel i5-2410M (Sandy Bridge). Ubuntu 18.04 (Debian) | Gnome | Grub2 | Systemd. I have installed some pre-packaged microcode from the Ubuntu repository, but I don't know if any of it applies to me:

    apt install intel-microcode
    dmesg | grep microcode
    [    0.000000] microcode: microcode updated early to revision 0x2d, date = 2018-02-07
    [    1.259590] microcode: sig=0x206a7, pf=0x10, revision=0x2d
    [    1.259643] microcode: Microcode Update Driver: v2.2.

Note that the date is February 7, 2018. Intel has a later release for the i5-2410M, April 25th 2018. https://downloadcenter.intel.com/download/27776/Linux-Processor-Microcode-Data-File?product=52224 CVE-2018-3640 [rogue system register read] aka 'Variant 3a' CPU microcode mitigates the vulnerability: NO STATUS: VULNERABLE (an up-to-date CPU microcode is needed to mitigate this vulnerability) How to fix: The microcode of...
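A hedged sketch of the manual route (the archive name is hypothetical, and the exact layout inside Intel's tarball may differ): unpack the microcode data file, install the blobs where the early-load path expects them, and rebuild the initramfs:

    tar xf microcode-20180425.tgz                     # hypothetical file name
    sudo cp -r intel-ucode/* /lib/firmware/intel-ucode/
    sudo update-initramfs -u                          # early load on next boot
    # or attempt a late reload without rebooting:
    echo 1 | sudo tee /sys/devices/system/cpu/microcode/reload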

windows - problems installing mysql and phpmyadmin to localhost

I know there have been many similar questions, but as far as I can tell, most of the other people have gotten further than I have... I'm trying to get a WAMP setup happening. I've got PHP and Apache running and talking to each other. PHP is in c:\PHP, Apache is in its default Program Files folder, and MySQL is in its default install location. I have localhost set up at D:\public_html\ and I'm able to navigate to localhost and see HTML and PHP files. But I have a simple MySQL test file:

    // hostname or ip of server (for local testing, localhost should work)
    $dbServer='localhost';
    // username and password to log onto db server
    $dbUser='root';
    $dbPass='';
    // name of database
    $dbName='test';
    $link = mysql_connect("$dbServer", "$dbUser", "$dbPass") or die("Could not connect");
    print "Connected successfully";
    mysql_select_db("$dbName") or die("Could not select database...

apache 2.4 - Apache2 runs at 100% CPU after graceful restart

OS: Debian 8.2, Apache: Apache/2.4.10 (Debian). For the past two days, my apache2 has "suddenly" started running at 100% CPU at night. The process does not stop on its own, and I have to run kill -9 to stop it in the morning; service apache2 stop does not work to stop the instance. I guess it all started when I installed Kolab (kolab.org) on my machine. I had also installed ownCloud before and did some smaller installs; otherwise, the Debian install is pretty much "fresh". But it started the night after installing Kolab. The problem seems to be triggered by a graceful restart at 6 in the morning; at least the logs hint in this direction. And if I manually run apachectl -k graceful, I get exactly this apache2 process running at 100% CPU. service apache2 restart does NOT trigger this problem! I have no clue how to proceed further to find the problem.
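To see what the spinning process is actually doing (my suggestion, not from the post), attaching strace to the busy PID right after triggering the graceful restart is a quick first step:

    # find the busy worker, then attach to it (12345 is a placeholder PID)
    top -b -n1 | grep apache2
    strace -p 12345 -f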

How to disable all bounce back email in exim 4.69

I have set up an email server to send out solicited newsletters. There should be no "regular" users of this server, so it is not desirable to send bounce notifications back to the recipient, especially since I am tracking bounces myself by parsing the log files periodically. What I want is to unconditionally prevent exim from ever sending a bounce notification email back to a sender. How can I do this? Thank you! (I accidentally posted this to superuser before posting it here; disregard that if you come across it.) What I want is an email server that will accept all incoming emails, deliver them accordingly (that is, remotely or locally) and not send a bounce notification to the sender upon a bounce. I log bounces myself, in a database. The only function bounce messages have in my setting is to waste resources and bandwidth. I need to send emails fast; using exiwhat during a run, I see a significant number of deliveries to bounce@host.com. I could potentially increase my email prod...
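One commonly suggested approach (my sketch, not confirmed by the post) is a router placed first in the exim routers section that discards any incoming message with an empty envelope sender, which is the hallmark of a bounce; note this blackholes bounces arriving back at the server rather than changing exim's own bounce generation:

    # first router in the routers section
    discard_bounces:
      driver = redirect
      condition = ${if eq{$sender_address}{}}
      data = :blackhole: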

Disabled root login for SSH in Centos 5.9, key login no longer working

I have a CentOS 5.9 server which I previously configured to access via SSH key login, and this has been working fine for many months. I recently had to have an issue resolved remotely, which required me to re-enable root login temporarily. After this was resolved, I disabled root login by setting "PermitRootLogin no" in the sshd_config file; however, I also set "PasswordAuthentication no", and I think this is where I've messed things up. After doing this, I can no longer log in to the server; I just get the message: Permission denied (publickey,gssapi-with-mic). I've basically got no other way of accessing the server via SSH, so I've come unstuck! I'm fairly certain it's setting PasswordAuthentication to no that is the problem; I haven't changed any other settings on the server which should affect the keys that previously worked fine. How can I regain access to the server via SSH? Answer You need...
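Once console (or rescue-media) access is regained, a sane sshd_config for key-only login with a non-root user looks something like this (my sketch, not the answer's):

    # /etc/ssh/sshd_config (relevant lines only)
    PermitRootLogin no
    PasswordAuthentication no
    PubkeyAuthentication yes
    AuthorizedKeysFile .ssh/authorized_keys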

email server - Best Practices for preventing you from looking like a spammer

I'd like to feel more confident setting up mail for my clients with regards to false positives. Here's what I know:
1. SPF records are good, but not every spam filter service/software (SFSS) uses them.
2. Reverse DNS (PTR) records are pretty much a necessity.
3. Open relays are bad.
(Here are "other tips" I've read:)
4. The reverse lookup of the IP address of your mail server should resolve to the domain that you're sending mail out from.
5. Your server should say HELO FQDN.of.your.mail.server.com when speaking to other mail servers.
6. The A host records in MX records should be (or resolve to the IP address of) your FQDN.of.your.mail.server.com.
I feel pretty good about 1 and 3. Here's where I'd like some clarification/suggestions: 2 and 4: I did a lot of digging, and this seems to be incorrect, as most spam filters are looking for a PTR in general, and one that's not generically assigned by the ISP; it doesn't appear that the domain you send mail out as...
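A quick way to verify items 2, 4 and 5 from the outside (my sketch; the IP and hostname are placeholders):

    dig -x 203.0.113.25 +short     # PTR: should name your mail host
    dig mail.example.com +short    # A: should return 203.0.113.25 again
    # forward and reverse agreeing like this is what filters call FCrDNS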

linux - Pros and cons of installing php 5.6 and mysql 5.6 on debian 7 wheezy

I was setting up a VPS server for my client, and we had agreed that I would install the latest stable versions of the programs. Now, in Debian 7 wheezy, php 5.4 and mysql 5.5 are available by default. I am aware of how to upgrade to php 5.6 and mysql 5.6, but the question is: as they are not available from Debian's official packages, is it recommended to upgrade? What are the pros and cons of doing so? Are there any security issues that might appear from upgrading? Thanks Answer The safety of upgrading depends on who is providing the upgraded packages. With the defaults on Debian stable, there is a security team that ensures all major security fixes get backported to those versions. If you install from a third-party repo, you are trusting them, rather than the Debian Security Team, to provide these patches. It is a possibility you will need to upgrade to a new version of PHP or MySQL (instead of the same version with backported security patches) to continue security support i...

domain name system - Clarification of why DNS zone files require NS records

This question was originally asked here: Why do DNS zone files require NS records? To summarise: "When I go to my registrar and purchase example.com, I will tell my registrar that my nameservers are ns1.example.org and ns2.example.org." But please can somebody clarify the following: After registration, the .com registry will have a record that tells a resolver it needs to visit ns1.example.org or ns2.example.org in order to find out the IP address of example.com. The IP address resides in an A record in a zone file on ns1.example.org, with an identical copy on ns2.example.org. However, inside this file there must also be 2 NS records which list ns1.example.org and ns2.example.org as the nameservers. But since we are already on one of these servers, this appears to be duplicated information. The answer originally given to the question said the nameservers listed in the zone file are "authoritative". If the nameservers didn't match, then the authoritative name...
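For concreteness, a minimal zone illustrating the apparent duplication (my sketch, reusing the names from the question; the address and SOA details are placeholders):

    ; zone file for example.com held on ns1.example.org
    example.com.   IN  SOA  ns1.example.org. hostmaster.example.com. ( ... )
    example.com.   IN  NS   ns1.example.org.   ; the "duplicated" NS records
    example.com.   IN  NS   ns2.example.org.
    example.com.   IN  A    192.0.2.80         ; hypothetical address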

sas - Adding sata disk w/o controller to IBM X3400 M3

We have an IBM X3400 M3 with a SAS controller and two disks in RAID 1. We need extra space without protection, so I tried to add a standard SATA disk to the server; it has two SATA ports on the mainboard. When I connect the SATA disk, the server can't boot from the SAS controller and tries to boot from SATA instead. I can't find any option in the boot order for the SAS controller, and adding the SAS controller as a boot device in the BIOS was no solution either. I tried another option: buying a PCIe 2.0 x1 SATA controller (IO-PCE9215-4I). This card has no ROM and only works from within the operating system; an amber light comes on near the PCIe slot. I tried disabling the ROM boot option and legacy mode for the PCIe slot, but no solution: the system restarts automatically at the "detecting bootable adapters" step. I can't find details on whether third-party PCI Express SATA controllers work with IBM servers. I plan to test with a PCI SATA controller. Is there any way to add a SATA disk to this server without a new IBM SAS/SATA controller?

Microsoft licensing and IT test servers

It's pretty common to hear people recommend using a "test server" for testing everything from backup/restore to software patches. Some Microsoft server products are fairly expensive. Are people really buying additional licenses for testing purposes? I can see buying extra hardware, but the cost of duplicate software could get pretty outrageous. I've looked into MSDN and TechNet, but licenses obtained via these subscriptions don't seem to be appropriate for "IT" testing, only development testing and evaluation. I suppose one could always use trial products, but what a hassle... and perhaps that's even a violation of the trial agreement. So I'm not trying to start an ethical debate or anything; I'm wondering what the most cost-effective approach would be. Is there an alternative to buying full retail licenses? (Specifically, products like Windows Server, Exchange Server, SQL Server, and SharePoint.) Answer Licences...

iptables - Forward differing hostnames to different internal IPs through NAT router

I have one public IP address, one router and multiple servers behind the router. I would like to forward different domains (all using HTTP) through the router to different servers. For example:

    example1.com     => 192.168.0.110
    example2.com     => 192.168.0.120
    foo.example2.com => 192.168.0.130
    bar.example2.com => 192.168.0.140

I understand that this could be accomplished using port forwarding, but I need all hosts running on port 80. I found some information about IP masquerading, but I found it difficult to understand and I am not sure it is what I am after. Another solution I have found is to direct all traffic to a reverse proxy server, which forwards the requests on to the appropriate server. What about iptables? I am using a Billion 7404 VNPX router; is there a feature on this router that can accomplish this? Are these my only options? Have I missed something completely? Is one recommended over the others? I have searched around but I don't think...
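Since plain NAT cannot see the HTTP Host header, the reverse-proxy route is the usual answer; a minimal nginx sketch (my example, reusing the names from the post):

    server {
        listen 80;
        server_name example1.com;
        location / { proxy_pass http://192.168.0.110; }
    }
    server {
        listen 80;
        server_name foo.example2.com;
        location / { proxy_pass http://192.168.0.130; }
    }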

Nginx - too many redirects

Sorry for the messy conf file; I have been trying everything, so it's become a mess. I want to enable SSL and redirect all traffic to HTTPS. Currently, with the below conf file, I can navigate to HTTP with no issues; HTTPS just redirects back to HTTP. If I uncomment line 5, then the browser simply says too many redirects. Any help is appreciated. UPDATE: Based on the comment below, I updated the conf file. Same issue: the minute I add return 301, I get a too-many-redirects error.

    # Redirect all variations to https://www domain
    server {
        listen 80;
        server_name name.com www.name.com;
        # return 301 https://www.name.com$request_uri;
    }
    server {
        listen 443 ssl;
        server_name name.com;
        ssl_certificate /etc/letsencrypt/live/name.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/name.com/privkey.pem;
        # Set up preferred protocols and ciphers. TLS1.2 is required for HTTP/2
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_ciphers ECDH+AESGCM:EC...
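If HTTPS "redirects back to HTTP", something in front of nginx (a CDN or load balancer terminating TLS) may be fetching over plain HTTP, which makes a blanket port-80 redirect loop forever. A hedged sketch of the usual workaround, assuming such a proxy exists and sets X-Forwarded-Proto:

    # inside the port-80 server block: only redirect requests
    # that did not already arrive over HTTPS at the proxy
    if ($http_x_forwarded_proto != "https") {
        return 301 https://www.name.com$request_uri;
    }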

hp - What should I take into consideration when choosing a RAID type?

There is now a canonical answer for this question over here that outlines each and every type of RAID in use today, when you should and shouldn't use each one, and how to calculate the usable raw capacity of each RAID type. I administer the hardware that runs a website that receives 10,000 daily visitors. What variables should I consider when deciding which RAID type to choose for our web application? The server is an HP DL380 G6 with 6 146GB (SCSI) HDDs. Answer I know that machine and those disks very well, but you've not told us what the application does and how much space you need. This is important because you have a few choices; let's go through them. RAID 0 - You'd have 6 x 146GB (actually you only get about 95% of that 146GB; take this into consideration throughout) available to your application. This gets you the most space, but it is a bad idea, as when one disk fails it kills your whole data set - avoid this. RAID 1 - you could theoretic...
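As a worked capacity comparison for the 6 x 146GB set (my arithmetic, extending the answer's pattern):

    RAID 0  : 6 x 146GB       = 876GB usable, no redundancy
    RAID 5  : (6 - 1) x 146GB = 730GB usable, survives 1 disk failure
    RAID 6  : (6 - 2) x 146GB = 584GB usable, survives 2 disk failures
    RAID 10 : (6 / 2) x 146GB = 438GB usable, survives 1 disk per mirror pair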

postgresql - Remote Postgres Connection

I have followed everything I can find online about how to do this, and as far as I can tell everything is correct, but it's not working. I have an Ubuntu 12.04 server running Postgres 9.1. I can SSH into the server and work with Postgres perfectly from there, connected via a local connection. When I tried to set up Postgres for remote access, however, I could not get it to work. I have made the following changes (all listed files are in /etc/postgresql/9.1/main):
In environment, I've added PGOPTIONS='-i'
In pg_hba.conf, I've added: host all all 0.0.0.0/0 md5
In postgresql.conf, I've changed listen_addresses='*'
I've checked the firewall; it was on the default config, but I opened the postgres port just in case. netstat -a shows: tcp 0 0 localhost:5432 *:* LISTEN. I've made sure my postgres user password and name are correct, and I can connect locally with that user. I've tried restarting (service postgresql restart) and start/stop. And yet, still, I can't connect...
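The netstat line looks like the tell here: localhost:5432 suggests the server is still binding only to loopback, i.e. the listen_addresses change has not taken effect (commonly because the line is still commented out, or the service was reloaded rather than fully restarted). A quick check (my sketch; the host, user and database names are placeholders):

    # should show 0.0.0.0:5432 once listen_addresses = '*' is active
    sudo netstat -plnt | grep 5432
    # then test from a remote machine
    psql -h 203.0.113.7 -U myuser -d mydb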

Which RFCs should be cited as internet standards?

It's extremely common for RFCs to be cited in support of opinions (including Serverfault Q&As), but the average IT employee has a very poor understanding of which RFCs define standards and which ones are purely informative. This should be no surprise: system administrators of all experience levels typically avoid poring over RFCs unless they have no choice. On a site like ours, it is extremely important that we don't perpetuate common misunderstandings in our upvoted answers. Random users cruising in from search engines are going to assume that upvotes with no disputing comments are sufficient indicators of vetting. Recently I stumbled across an answer from 2011 making it apparent that this is definitely not getting caught in some cases as we upvote, and it probably warrants some effort to inform our community and the internet at large. So, without further ado, how does one differentiate between an RFC that is quotable as an internet standard an...
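As a concrete starting point (mine, not from the post), every RFC states its track in the Category field of its header, for example:

    Request for Comments: 5321
    Category: Standards Track

    Request for Comments: 1912
    Category: Informational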

domain name system - Vyatta and DNS Rewrite (aka hairpin or doctoring)

I'm trying to make my public IP reachable also from inside the LAN. I know that it's better to split DNS in order to have an internal zone that resolves hosts to internal IPs, but for a lot of reasons this is not applicable to my environment. I have a "simple" configuration: a server and a few NAT ports:

    set nat destination rule 4002 description 'NAT inbound'
    set nat destination rule 4002 destination address 'x.y.z.k'
    set nat destination rule 4002 destination port '80,443,10050,10051,11051'
    set nat destination rule 4002 inbound-interface 'bond1'
    set nat destination rule 4002 protocol 'tcp'
    set nat destination rule 4002 translation address '10.0.0.190'
    set nat source rule 4002 description 'NAT outbound'
    set nat source rule 4002 outbound-interface 'bond1'
    set nat source rule 4002 source address '10.0.0.190'
    set nat source rule 4002 translation address 'x.y.z.k'

When I try to access the public IP fr...
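For reference, hairpin NAT on Vyatta usually takes two extra rules on the LAN side (a sketch under the post's addressing; the LAN interface name bond0 and the rule numbers are assumptions): a destination rule translating the public IP for traffic arriving from the LAN, plus a source rule masquerading those hairpinned connections so replies flow back through the router:

    # hypothetical LAN-side hairpin rules
    set nat destination rule 4010 inbound-interface 'bond0'
    set nat destination rule 4010 destination address 'x.y.z.k'
    set nat destination rule 4010 translation address '10.0.0.190'
    set nat source rule 4010 outbound-interface 'bond0'
    set nat source rule 4010 destination address '10.0.0.190'
    set nat source rule 4010 translation address 'masquerade'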