Posts

Showing posts from January, 2019

security - Scanning Vendors (non-PCI related) - Hype or Not?

For months now, when building websites for clients, I've come across countless "Site Security Scanners" supposedly endorsed/approved by various shared web hosts, which claim to run XSS, SQL injection, spam, and other checks, all for around $300 and up if you want the plans that do more than one scan a year. I'm not talking about PCI compliance scans like those offered by Comodo, McAfee, Symantec and the like, which normally cost thousands per year; rather, these vendors seem to be playing off the fear of less technical business owners by offering an affordable alternative to the PCI firms. Although I could mention a few companies, I know of too many to list, so my main question is: are these "affordable" scans a good alternative to a PCI scanner if you don't do eCommerce? What about if you're on a shared/managed plan -- shouldn't the host handle this? As for sites with eCommerce, since some merchant processors mandate the

linux - Bash: how to know if the last command's output ends with a newline or not?

Most of the time the output of a command ends with a newline character. But sometimes it does not, so the next shell prompt is printed on the same line as the output. Example:

root@hostname [~] # echo -n hello
helloroot@hostname [~] #

I've always found that very annoying. Now, I could just add a "\n" at the beginning of the PS1 variable, but most of the time that will print one extra line I don't need. Is it possible to know whether the last command's output ended with a newline or not? Solution (thanks to Dennis):

PS1='$(printf "%$((`tput cols`-1))s\r")\u@\h [\w]\$ '

Answer I've been experimenting with the following to emulate the feature from zsh in Bash:

$ unset PROMPT_SP; for ((i = 1; i <= $COLUMNS + 52; i++ )); do PROMPT_SP+=' '; done
$ PS1='\[\e[7m%\e[m\]${PROMPT_SP: -$COLUMNS+1}\015$ '

It issues a reverse-video percent sign, followed by a bunch of spaces to make it wrap to the next line,
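
For reference, a minimal sketch of another way to detect the missing newline, assuming the terminal answers the DSR "report cursor position" query (most xterm-compatible terminals do); prompt_newline is a hypothetical name:

# emit a newline before the prompt only when the cursor is mid-line
prompt_newline() {
  local row col
  # ask the terminal where the cursor is; the reply looks like ESC[row;colR
  IFS='[;' read -sdR -p $'\e[6n' _ row col
  # column > 1 means the last command's output had no trailing newline
  (( col > 1 )) && printf '\n'
}
PROMPT_COMMAND=prompt_newline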

filesystems - directory with 980MB meta data, millions of files, how to delete it? (ext3)

Hello, so I'm stuck with this directory:

drwxrwxrwx 2 dan users 980M 2010-12-22 18:38 sessions2

The directory's contents are small - just millions of tiny little files. I want to wipe it from the filesystem but have been unable to. My first tries were:

find sessions2 -type f -delete

and

find sessions2 -type f -print0 | xargs -0 rm -f

but I had to stop because both caused escalating memory usage. At one point it was using 65% of the system's memory. So I thought (no doubt incorrectly) that it had to do with the fact that dir_index was enabled on the system. Perhaps find was trying to read the entire index into memory? So I did this (foolishly):

tune2fs -O^dir_index /dev/xxx

Alright, so that should do it. Ran the find command above again and... same thing. Crazy memory usage. I hurriedly ran tune2fs -Odir_index /dev/xxx to re-enable dir_index, and ran to Server Fault! Two questions: 1) How do I get rid of this directory on my live system? I don't care how long it takes, as long
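
One approach often suggested for exactly this situation, as a sketch (the empty-directory path is arbitrary; test somewhere non-critical first): sync the directory against an empty one, which deletes entries in a single sequential pass instead of building a huge file list in memory:

# create an empty directory and "mirror" it onto the full one
mkdir /tmp/empty
rsync -a --delete /tmp/empty/ sessions2/
rmdir sessions2 /tmp/empty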

security - Enabling cipher TLS_RSA_WITH_3DES_EDE_CBC_SHA (0xa) on Windows Server 2003+ISA 2006

I have been given the task of disabling all "weak" ciphers/protocols on our very old ISA server based on Windows Server 2003. I have disabled all protocols but TLS 1.0, and all ciphers but RC2/128, RC4/128 and Triple DES 168/168. But the Qualys SSL Labs test utility does not show that 3DES encryption is available on my ISA server. The only cipher suites listed are:

TLS_RSA_WITH_RC4_128_MD5 (0x4)
TLS_RSA_WITH_RC4_128_SHA (0x5)

This KB (http://support.microsoft.com/kb/245030) says that when the Triple DES 168 cipher is enabled, the TLS_RSA_WITH_3DES_EDE_CBC_SHA cipher suite is available. However, it is not. We need this cipher suite to allow a Windows Phone 8.1 device to connect to ActiveSync published by this ISA. What could be the reason for 3DES encryption being unavailable in this configuration, and what should we do in order to allow the connection for a Windows 8.1 phone without being vulnerable to POODLE? EDIT: There was apparently a server-side malfunction of some sort; a reboot fixed 3DES availability, although the
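
For reference, KB 245030 controls cipher availability through SCHANNEL registry keys; a sketch of the relevant value, assuming the Server 2003-era key name (back up the registry first, and note a reboot is needed, which matches the fix mentioned in the edit):

Windows Registry Editor Version 5.00

; enable the Triple DES 168 cipher (dword:ffffffff = enabled)
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\Triple DES 168/168]
"Enabled"=dword:ffffffff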

domain name system - How does one configure UFW to allow private DNS requests, but block DNS requests from internet

I have an Ubuntu Server 12.04 machine with two network cards: eth0 is connected to the internet, eth1 is connected to a private network (192.168.10.1). The server is configured as a gateway and hosts DNS and DHCP for the private network. Computers in the private network (say, with IP address 192.168.10.50) can successfully connect to the internet. The UFW rules look as follows:

Status: active
To               Action  From
--               ------  ----
22               ALLOW   Anywhere
80               ALLOW   Anywhere
443              ALLOW   Anywhere
67/udp on eth1   ALLOW   68/udp
53               ALLOW   Anywhere
22               ALLOW   Anywhere (v6)
80               ALLOW   Anywhere (v6)
443              ALLOW   Anywhere (v6)
67/udp on eth1   ALLOW   68/udp
53               ALLOW   Anywhere (v6)

Any internet user can query my DNS server. I'd like to block such requests, as an open resolver poses a security risk. I reset the firewall, allowed access to ports 80, 443, 22 and
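
A minimal sketch of the usual fix, assuming the LAN is 192.168.10.0/24 behind eth1: replace the blanket port-53 rule with an interface-scoped one so only the private side can query:

# remove the rule that exposes DNS to the internet (assumes it was added as "allow 53")
ufw delete allow 53
# allow DNS only from the private network, only on eth1
ufw allow in on eth1 from 192.168.10.0/24 to any port 53
ufw status verbose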

ping - Junior admin - how to discover/map the network to increase understanding?

I am a junior admin and have been tasked with gaining an understanding of the network. I know and use some of the servers on the network, so I am able to tracert/ping them to see the names/addresses of equipment along the way, and gradually build a map, but how do I put the feelers out to find out what's out there if I don't know the names of the servers, etc.? Answer Any time I want to map an unfamiliar network, I start with what the routing protocols can tell me. And usually the routing protocols can tell me pretty much everything. After all, the routing protocols have to know what the network looks like—and it's almost never exactly the way it's documented (if it's documented at all). For an example of how this would go, and to make things easy, let's say we're running OSPF. The great thing about OSPF (and link-state protocols generally) is that every router has already figured out the topology of the network. You just have to ask one of t
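
If the routing protocols aren't available to you, active discovery can fill in the gaps; a sketch using nmap (the subnet here is hypothetical; substitute your own ranges, and get permission before scanning):

# ping sweep: which hosts answer in this range (no port scan)
nmap -sn 192.168.10.0/24
# same sweep, plus the path taken to each responding host
nmap -sn --traceroute 192.168.10.0/24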

linux - Web Server Security Overkill?

I've been doing "extensive" research on securing a Linux web server. On top of what is considered the "basics" (removing unused services, hardening SSH, iptables, etc.), is it wise to include anti-rootkit tools (Tripwire) and an anti-virus (ClamAV)? Are these just overkill for a web server? I know this is a very vague question, but I'm curious about others' opinions. My future environment:

- Ubuntu 10.04
- fail2ban
- nginx 0.8.x
- PHP 5.3.x (Suhosin, APC, memcached)
- MongoDB 1.6.x

Possible applications:

- web services
- web apps with user uploads (pictures, PDFs, etc.)
- typical websites (forms, etc.)

If you have any other tips, please feel free to add! Thanks
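
As a sketch of what the non-overkill middle ground looks like on that stack (package names are from the Ubuntu 10.04 era; verify against current repositories):

# file-integrity baseline plus on-demand malware scanning
apt-get install aide clamav rkhunter
aide --init                 # build the initial integrity database (as root)
freshclam                   # pull current ClamAV signatures
clamscan -ri /var/www       # recursive scan, report infected files only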

apache2 - Apache Over SSL - Request Entity Too Large , mod_security not installed

I'm having a problem with "Request Entity Too Large" when POSTing large data via AJAX to a PHP program. I don't think mod_security is installed or enabled, as there is no /etc/modsecurity folder and I can't find anything by that name when searching the server for files. The domain is running on a virtual host in ssl.conf, and all works fine except this issue with posting large amounts of data. The strange thing is, I also have the same server setup on a local development virtual machine, also running SSL with a self-signed certificate, and the problem doesn't happen on the local machine; it's just the live dedicated server where the problem occurs. Answer It may be many things, but you have this LimitRequestBody directive in Apache, which is defined as such: This directive specifies the number of bytes from 0 (meaning unlimited) to 2147483647 (2GB) that are allowed in a request body. See the note below for the limited applicability to proxy requests. T
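
A sketch of the two directives worth checking in the live server's ssl.conf (the values are examples; LimitRequestBody is core Apache, and SSLRenegBufferSize only matters if a per-directory SSLVerifyClient forces a renegotiation, which would also explain why the problem appears only on the SSL host):

<VirtualHost *:443>
    # allow request bodies up to 10 MB; the default is unlimited, but a
    # global or inherited value may be capping it on the live box
    LimitRequestBody 10485760
    # buffer for POST bodies replayed during an SSL renegotiation
    # (default 131072 bytes, which matches many "too large" reports)
    SSLRenegBufferSize 10486000
</VirtualHost>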

redhat - Apache Config: RSA server certificate CommonName (CN) ... NOT match server name?

I'm getting this in error_log when I start Apache:

[Tue Mar 09 14:57:02 2010] [notice] mod_python: Creating 4 session mutexes based on 300 max processes and 0 max threads.
[Tue Mar 09 14:57:02 2010] [warn] RSA server certificate CommonName (CN) `*.foo.com' does NOT match server name!?
[Tue Mar 09 14:57:02 2010] [warn] RSA server certificate CommonName (CN) `www.bar.com' does NOT match server name!?
[Tue Mar 09 14:57:02 2010] [notice] Apache configured -- resuming normal operations

Child processes then seem to segfault:

[Tue Mar 09 14:57:32 2010] [notice] child pid 3425 exit signal Segmentation fault (11)
[Tue Mar 09 14:57:35 2010] [notice] child pid 3433 exit signal Segmentation fault (11)
[Tue Mar 09 14:57:36 2010] [notice] child pid 3437 exit signal Segmentation fault (11)

The server is RHEL. What's going on, and what do I need to do to fix this? EDIT As requested, the dump from httpd -M:

Loaded Modules:
core_module (static)
mpm_prefork_module (static)
http_module (sta
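
The CN warning on its own is common when name-based SSL vhosts share one address; it's worth confirming what each vhost is actually serving. A sketch (the certificate path is hypothetical):

# print the CN of the certificate a vhost is configured to load
openssl x509 -in /etc/pki/tls/certs/server.crt -noout -subject
# ask the running server what it actually presents on :443
echo | openssl s_client -connect localhost:443 2>/dev/null | openssl x509 -noout -subject

The ServerName in each vhost should match the certificate it loads; the segfaults are likely a separate issue, which is why the httpd -M module list was requested.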

security - How can I audit a Linux filesystem for files which have been changed or added within a specific timeframe?

We are a website design/hosting company running several sites, and someone was able to write arbitrary data to the file system. We suspect that they still have some scripts installed and need a way to audit anything that has been changed or added in the last 10 days. Is there a command or script we can run to do this? Answer Start over: personally, I would have trouble sleeping at night unless I just rebuilt each server from a fresh install. I strongly recommend you do this; hackers can hide things and, if they are good enough, make files look like they haven't changed even if they have. Why find won't work: for example, to change the modification time:

[kbrandt@kbrandt: ~/scrap/touch] ls -l foo
-rw-rw-r-- 1 kbrandt kbrandt 4 2010-04-05 12:22 foo
[kbrandt@kbrandt: ~/scrap/touch] touch -m -t 199812130530 foo
[kbrandt@kbrandt: ~/scrap/touch] ls -l foo
-rw-rw-r-- 1 kbrandt kbrandt 4 1998-12-13 05:30 foo

ctime might be better to search for if you go the find route, but there ma
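
If a full rebuild has to wait, a sketch of the quick audit being asked about (output paths are arbitrary; run from a trusted environment if possible, since a compromised find can lie):

# files modified in the last 10 days, staying on this filesystem
find / -xdev -type f -mtime -10 2>/dev/null > /root/audit-mtime.txt
# ctime cannot be set with touch(1), so audit it as well
find / -xdev -type f -ctime -10 2>/dev/null > /root/audit-ctime.txt

Comparing installed files against package checksums (rpm -Va on Red Hat systems, debsums on Debian) catches modified system binaries that timestamps alone would miss.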

Samba User permissions

I'm trying to set up custom user permissions for a certain user on my Samba server. What I'm trying to achieve is to log in as the mentioned user, let's say its name is user1, and only access two folders, folder1 and folder2 (with rwx permissions). The share contains plenty of other folders, which he should NOT see. All my Samba users are part of a group, let's say it's called staff. My global config is this:

[global]
log file = /var/log/samba/log.%m
load printers = no
socket options = TCP_NODELAY
full_audit:prefix = %u|%T|%m|%S
full_audit:facility = local5
interfaces = 192.168.0.20/255.255.255.0
passdb backend = tdbsam
allow hosts = 127. 192.168.0. 192.168.3.
unix extensions = no
cups options = raw
vfs objects = full_audit
full_audit:success = connect disconnect mkdir rmdir write sendfile rename unlink chmod fchmod chown fchown
full_audit:priority = notice
workgroup = MYSERVER
full_audit:failure = connect
use sendfile = yes
security = user
max log size = 50

The shared director
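
One way to approach the visibility requirement, as a sketch (share name and path are hypothetical; parameter names are from smb.conf(5)): let filesystem permissions decide what user1 can read, and have Samba hide the rest:

[share]
    path = /srv/share
    valid users = @staff
    # hide directory entries the connecting user has no permission to read,
    # so user1 sees only folder1 and folder2 once filesystem modes are set
    hide unreadable = yes

On the filesystem side, something like setfacl -m u:user1:rwx on folder1 and folder2, and setfacl -m u:user1:--- on the folders he must not see, pairs with hide unreadable to make them disappear from his listing.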

apache 2.2 - Understand ssl setup

Goals:

1. If the user supports SNI and hits myurl1.server.com (https) or myurl2.server.com (https), it will match the right vhost (the last two vhosts).
2. If the user does not support SNI and hits myurl1.server.com (https) or myurl2.server.com (https), it will be caught by the fallback vhost (the first one on port 443). It contains the SAN certificate, and it will hit the server again to do the match. This time it will hit the last two vhosts.
3. If the user enters an unknown URL with either http or https, it will be caught by the first vhost, which shows an error page.

I have tested all three goals and it's working fine. Questions: When the user hits the SAN vhost (https), which makes a new request to itself, how does Apache know it will match the last two vhosts (443) when the ProxyPass in the SAN vhost is using http (80)? When the user hits the SAN vhost, I can't see any requests in the SAN access log. The requests only appear in the last two vhosts, even if they go through the SAN vhost. However I can see so
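
For diagnosing which vhost answers, a sketch comparing the certificate returned with and without SNI (hostnames are from the setup above; the connect address is an assumption):

# with SNI: should return the myurl1 vhost's certificate
echo | openssl s_client -connect server.com:443 -servername myurl1.server.com 2>/dev/null | openssl x509 -noout -subject
# without SNI: should fall back to the first :443 vhost (the SAN certificate)
echo | openssl s_client -connect server.com:443 2>/dev/null | openssl x509 -noout -subject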

"safe" ext4 configuration for systems running unattended

itemprop="text"> I have a system running linux that must run unattended for long periods of time. The system uses industrial CF card for storage. Most of the time there are no writes to flash, although every now and then some configuration data/settings can be modified. The system must be resistant to power failures. I would like to use ext4 for this. What is the best way to configure ext4 for this kind of setup? Bearing in mind that: Performance is not a problem at all (especially write performance) Upon power loss, the system should always boot in a clean state, even if that means that data written in the last few seconds is lost If it is possible to avoid fsck, then all the better. (I am aware of this related question: href="https://serverfault.com/questions/318104/prevent-data-corruption-on-ext4-li

"safe" ext4 configuration for systems running unattended

I have a system running Linux that must run unattended for long periods of time. The system uses an industrial CF card for storage. Most of the time there are no writes to flash, although every now and then some configuration data/settings can be modified. The system must be resistant to power failures. I would like to use ext4 for this. What is the best way to configure ext4 for this kind of setup? Bearing in mind that:

- Performance is not a problem at all (especially write performance)
- Upon power loss, the system should always boot in a clean state, even if that means that data written in the last few seconds is lost
- If it is possible to avoid fsck, then all the better

(I am aware of this related question: Prevent data corruption on ext4/Linux drive on power loss.) Answer I've worked on building a system for automation on boats, and there was one prerequisite: at any moment the power could go down, and everything must bootstrap again correctly. My solution was to build
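
As a sketch of the consistency-over-performance direction (device name is an example; verify the options against your kernel's ext4 documentation):

# /etc/fstab: journal file data as well as metadata; remount read-only on errors
/dev/sda1  /  ext4  noatime,data=journal,barrier=1,errors=remount-ro  0  1

# disable time- and mount-count-based full fscks; journal replay at
# mount time still restores consistency after a power loss
tune2fs -c 0 -i 0 /dev/sda1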

linux - DD copy works at terminal but not by cron

On an RHEL 5.4 system I set up a script to back up a drive with a dd copy every night by cron. I write the dd output to a log file and email it. The job is in both /etc/crontab and /var/spool/cron/root, put there after I found it wouldn't even run under cron. The script is supposed to copy /dev/sda to /mnt/backup/sda.img (/mnt/backup is a mounted 250 GB external drive). When I run it as root at the terminal it works fine: I can see data being written to the disk and sda.img getting bigger. However, when run by cron, I get the output from dd saying it copied 147 GB, but I cannot find where it put that 147 GB - it didn't put it in sda.img. It's not on the filesystem anywhere, as there is only 50 GB left on it. Where did it go? And how can I make sure the same thing happens under cron that happens in the terminal? I do stop crond and start it before and after the backup; however, I am under the impression that cron kicks the job off, I shut it down, it backs up, starts again and is on its merry way. Thanks. EDIT: S
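
Since cron runs with a minimal environment (sparse PATH, no login profile), a defensive crontab entry is worth trying; a sketch (schedule and log path are examples; adjust the mountpoint(1) path if yours lives elsewhere): use absolute paths, refuse to run when the external disk is not mounted, and capture stderr:

# m h dom mon dow command
30 2 * * * /bin/mountpoint -q /mnt/backup && /bin/dd if=/dev/sda of=/mnt/backup/sda.img bs=1M 2>> /var/log/dd-backup.log

Separately, note that imaging a mounted, live /dev/sda with dd produces an inconsistent copy; a snapshot or an offline copy is safer if the image ever has to be restored.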

domain name system - Is it possible to simplify my website DNS records?

Let's say I have mydomain.com and my shared webhost server is 1.2.3.4. My domain registrar and webhosting company are two different entities. Currently I have my DNS records configured like this in my registrar's DNS Management section:

A      *.mydomain.com           1.2.3.4
A      mydomain.com             1.2.3.4
CNAME  ftp.mydomain.com         mydomain.com
CNAME  www.mydomain.com         mydomain.com
A      subdomain1.mydomain.com  1.2.3.4
A      subdomain2.mydomain.com  1.2.3.4
A      subdomainN.mydomain.com  1.2.3.4

I have two questions/problems: 1) Is there any way to simplify the sub-domain A records, so that every time I need to add a new sub-domain to my website I don't have to create a new A record for it in the DNS Management? 2) With the current configuration, when the user points to any sub-domain that doesn't exist (for instance: idontexist.mydomain.com), a default page from cPanel is displayed. I suppose this is the normal behavior? Or should it return a 404 error? If so, how can I make it return a 404 error? Answer
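
For what it's worth, the wildcard record already covers any name not explicitly defined, so the per-subdomain A records are redundant; a sketch of the minimal equivalent set in zone-file notation:

; everything not listed explicitly resolves via the wildcard
mydomain.com.      IN A      1.2.3.4
*.mydomain.com.    IN A      1.2.3.4
www.mydomain.com.  IN CNAME  mydomain.com.
ftp.mydomain.com.  IN CNAME  mydomain.com.

The cPanel default page for unknown subdomains is then the web server's doing, not DNS's: with a wildcard, DNS answers for every name, and the HTTP vhost decides what to serve, so a 404 would have to come from a catch-all vhost.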

domain name system - Intranet BIND server fallback to internet DNS servers?

On our local small business network, we need to run an intranet-only BIND (named) DNS server for local, intranet-only addresses. For example, we do a lot of web development on the network, so having a DNS server to manage local addresses (example: testsite3.local) is beneficial. One aspect of this we are unsure of: currently all the business desktops have their DNS servers set to 75.75.75.75 and 75.75.76.76, which are Comcast internet DNS servers (Comcast is our business ISP). If we change the computers' DNS to point instead to our local DNS server, how do we set up BIND to "forward" all failed requests out to the Comcast DNS servers? For example, if someone on the network tries to visit www.google.com, their computer will first check with our local DNS server, which obviously doesn't have internet DNS records in it like google.com. So then, either the computer needs to know to check with the secondary or tertiary DNS servers, OR can the local DNS s
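
What's being described is a forwarding resolver; a sketch of the named.conf options block (syntax per the named.conf documentation):

options {
    // serve local zones authoritatively; send everything else to Comcast
    forwarders { 75.75.75.75; 75.75.76.76; };
    // "only" never falls back to full recursion; use "forward first"
    // if BIND should resolve on its own when the forwarders fail
    forward only;
};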

linux - ssh: transparent redirect of incoming connections based on host- or username

I have a Linux server with a single, static IPv4 address and several DNS entries pointing to the IP. The server hosts several Docker containers and listens on port 22 for incoming SSH connections. There are three different use cases where SSH connections to the server are made:

- access to git repositories (git.myserver.tld, username is always git, access is realized by using different keys)
- access to files using sftp (data.myserver.tld, username is always data, access is realized by using different keys)
- direct access to the server (myserver.tld, username corresponds to a local unix user)

The first two servers (git and data) are each running inside a Docker container. My question is: is it somehow possible to redirect incoming SSH connections to the SSH servers of the Docker containers if git or data is required, and handle them directly if not? Could this be realized by looking at the username (redirect if it is git or data, handle it otherwise) or the hostname (is there some eq
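
On the hostname side, the SSH protocol carries no server-name indication (no SNI equivalent), so git.myserver.tld and data.myserver.tld are indistinguishable server-side; the username, however, can be matched. A sketch of one username-based approach in the host's sshd_config (container addresses and the ForceCommand wiring are assumptions, not a tested recipe):

# hand every session for "git" to the container's sshd
Match User git
    # $SSH_ORIGINAL_COMMAND preserves the git-upload-pack/receive-pack call
    ForceCommand ssh -q git@172.17.0.2 "$SSH_ORIGINAL_COMMAND"

Match User data
    ForceCommand ssh -q data@172.17.0.3 "$SSH_ORIGINAL_COMMAND"

This assumes the host can authenticate to the containers non-interactively (e.g. a host-held key); the simpler alternative is publishing each container's sshd on a distinct host port and giving clients per-host Port entries in ssh_config.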

Microsoft SQL Server login using Active Directory Credentials

itemprop="text"> Our Microsoft SQL Servers are running on Windows Servers which are part of an Active Directory domain. For easy user management, our SQL authorization is set up by using the Active Directory User Groups as explained in this href="https://stackoverflow.com/questions/5029014/how-to-add-active-directory-user-group-as-login-in-sql-server">post . Now this works fine as long as everyone is working inside the domain. People login to their computer using their AD credentials and can connect to the SQL server by using the "Windows Authentication". Problem is that our users will also be working on other client computers which are not part of the Active Directory domain (and adding them to the domain is not an option). I was hoping they could simply keep using their AD credentials to login t