
Posts

Showing posts from December, 2019

domain name system - Dig DNS vs Whois

I have a client whose name service is handled by a third party. The domain was registered through GoDaddy (yet another third party). The name service provider had their own domain names hijacked (I don't know how), so the name servers my client uses became unreachable or compromised. Somehow the name service provider was able to get DNS to resolve using different name servers. I am baffled. How is it that the service provider can alter the name servers for the client's domain? Abbreviated explanation: GoDaddy, the registrar for example.com, lists ns1.ispnameserver.com and ns2.ispnameserver.com as name servers. The ISP provides name service for example.com and manages ns1.ispnameserver.com and ns2.ispnameserver.com. The ISP loses control of ispnameserver.com. Somehow the ISP is able to provide new name servers ns1.newispname.com and ns2.newispname.com, and magically DNS uses ns1.newispname.com and ns2.newispname.com to resolve queries for example.com. In essence the ISP was able to hijack …
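
A quick way to see where such a change was made (my own illustration using the placeholder example.com, not something from the post) is to compare the registry's whois record with the delegation the parent zone and your resolver actually serve:

    whois example.com | grep -i "name server"     # NS set recorded at the registry
    dig NS example.com @a.gtld-servers.net        # delegation published by the .com parent zone
    dig +short NS example.com                     # NS records your resolver is actually using
    dig +trace example.com                        # walk the whole delegation from the root down

If whois and the parent-zone delegation already show the new servers, the change was made at the registrar/registry level (for example through whatever reseller account originally registered the domain), not inside the zone the ISP lost control of.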

/32 subnets on Ethernet via DHCP

Is it possible to assign to an Ethernet host, via DHCP, a subnet mask covering only the host itself, e.g. 192.168.1.123/32? Do common operating systems support this kind of configuration? I'd like the hosts to send all of their traffic to the router (and not directly to some other host on the same segment), but still be able to communicate with each other (so no "client isolation"); effectively creating a point-to-point link, but without any client-side configuration. Update: My intention is to configure a home router running dd-wrt so that all the traffic has to pass through the IP stack on the router, so it can be filtered by some ipfilter rules. I'd hoped for a general solution, some standard way to implement point-to-point Ethernet connections that can still be automatically configured by DHCP for all common operating systems. Based on the responses so far, this doesn't seem to be that easy; I'll read some more about VLANs and then reconsider my plans.
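
For anyone who wants to experiment, this is roughly the ISC dhcpd declaration that would hand out a host-only mask (my own sketch, not something the thread confirms works; many client stacks refuse a gateway that is off-link from a /32, which is essentially why the answers point elsewhere):

    # /etc/dhcp/dhcpd.conf -- experimental sketch, not a tested recipe
    subnet 192.168.1.0 netmask 255.255.255.0 {
      range 192.168.1.100 192.168.1.200;
      option subnet-mask 255.255.255.255;   # hand every client a host-only (/32) mask
      option routers 192.168.1.1;           # gateway is now off-link; clients may reject it
    }

Whether a client accepts the lease and installs an on-link route to the gateway is exactly the OS-dependent part the question is asking about.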

apache 2.2 - folder permissions for linux webhost (ubuntu)

I have a VPS running Ubuntu 12.04 that is hosting about 9 sites, a mix of WordPress and Joomla (latest versions). Apache runs as www-data and the sites are hosted as virtual hosts under /var/www/{sites}. Today I disabled FTP (insecure) and replaced it with SFTP. I've set this up so that every site/virtual host has an SFTP user that is chrooted. When you log in as the SFTP user you get chrooted into /usr/local/chroot/{user}; into this directory I've mounted the specific virtual host the user needs access to (with mount --bind /var/www/{subdomain} /usr/local/chroot/{user}/web). The SFTP users are set up so that they can only use SFTP and not SSH, and their shell is set to /sbin/false. The only problem/question I have is with the user permissions. The sites run as www-data, so when I upload something through, for example, Joomla the file is created with www-data:www-data ownership. However, if I upload a file using SFTP it's uploaded as {user}:www-data, and this can lead to permission problems.
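
One common pattern for this situation (my own sketch, not from the post; the paths and the www-data group follow the setup described above) is to make the docroot group-owned by www-data, group-writable, and setgid, so new files keep the shared group regardless of which user created them:

    chown -R {user}:www-data /var/www/{subdomain}
    find /var/www/{subdomain} -type d -exec chmod 2775 {} \;   # setgid dirs: new files inherit the www-data group
    find /var/www/{subdomain} -type f -exec chmod 0664 {} \;   # group www-data can read and write existing files

The setgid bit only fixes the group of newly created files; the umask of both Apache/PHP and the SFTP subsystem still has to allow group write (for example umask 0002) for files created later on.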

security - Why should I firewall servers?

PLEASE NOTE: I'm not interested in making this into a flame war! I understand that many people have strongly-held beliefs about this subject, in no small part because they've put a lot of effort into their firewalling solutions, and also because they've been indoctrinated to believe in their necessity. However, I'm looking for answers from people who are experts in security. I believe that this is an important question, and the answer will benefit more than just myself and the company I work for. I've been running our server network for several years without a compromise, without any firewalling at all. None of the security compromises that we have had could have been prevented with a firewall. I guess I've been working here too long, because when I say "servers", I always mean "services offered to the public", not "secret internal billing databases". As such, any rules we would have in any firewalls would have to allow access to …

apache 2.2 - Server Freeze Up Under Load

I'm having a problem with a Debian server that I thought was due to bad RAM, but it is persisting. It's a Dell PowerEdge 6800 with two dual-core 3.6 GHz Xeon processors and 5 GB of DDR2 ECC 333. I've got a single 73 GB SCSI drive. I'm working it to death right now, pulling records from MySQL to build Asterisk .call files (small text files) which trigger SIP calls. We manage it via a CGI interface, and the system is also running Citadel for our mail, but we have fewer than five users, so that's not a huge drain. My peak usage seems to be about 460 calls per minute. Load hovers between 2.0 and 4.3; if I push it past that, it spikes to over 22.0. The problem I'm having is that, about an hour into a dial, it's freezing up on me. Last night I started it at 5:59, and at 6:55:17 the system became non-responsive. Nothing was logged, I couldn't connect via SSH or HTTP, it responded to ping, and nmap showed open ports which I was able to telnet to but could not elicit any response from …
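
Not part of the original post, but when a machine locks up without logging anything, one cheap trick is to keep resource counters streaming to disk so the last lines written before the freeze show what was climbing (the intervals and file names here are arbitrary, and iostat comes from the sysstat package):

    nohup vmstat 5 >> /var/log/vmstat.log 2>&1 &      # memory, swap, and run queue every 5 seconds
    nohup iostat -x 5 >> /var/log/iostat.log 2>&1 &   # per-disk utilisation and wait times

With MySQL reads, .call file writes, mail, and CGI all sharing a single 73 GB SCSI spindle, the iostat utilisation and await columns are a plausible first place for the trouble to show up.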

linux - How do I apply multiple subnets to a server with one NIC?

I am trying to route multiple IPs through one physical NIC on my dedicated server for use with Proxmox KVM VMs. I have a dedicated server which is currently running Debian 4.4.5-8 with 3 IP addresses available for use, which will be written here as 176.xxx.xxx.196 (main), 176.xxx.xxx.198 (on the same subnet as the main) and 5.xxx.xxx.166 (a different subnet). I am currently trying to route the third IP address to the dedicated server for use with a VPS that I have set up using Proxmox v2.x, but am having a really, really hard time doing so. Virtual interfaces binding the additional IP addresses work as expected, ruling out external routing problems. The provider has given the following information for the IP addresses on the main subnet: gateway 176.xxx.xxx.193, netmask 255.255.255.224, broadcast 176.xxx.xxx.223. As well as the following information for the IP address on the second subnet: gateway 5.xxx.xxx.161, netmask 255.255.255.248, broadcast 5.xxx.xxx.167. Everything I've tried with /etc/network/inte…
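
For reference only, a minimal sketch of the Debian-style config this kind of setup usually converges on (my own assumption built from the masks quoted above; vmbr0 as the Proxmox bridge name is also an assumption, and providers differ on whether the extra subnet should be bridged or routed):

    # /etc/network/interfaces -- sketch, not a tested recipe
    auto vmbr0
    iface vmbr0 inet static
        address 176.xxx.xxx.196
        netmask 255.255.255.224
        gateway 176.xxx.xxx.193
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        # second address in the same /27
        up ip addr add 176.xxx.xxx.198/27 dev vmbr0
        # send traffic for the foreign-subnet IP onto the bridge so the VM holding it can answer
        up ip route add 5.xxx.xxx.166/32 dev vmbr0

Some providers instead require the VM to use the host's own address as its gateway (a routed setup with proxy ARP), so it's worth checking their virtualisation documentation before settling on one model.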

domain name system - What's the difference between FQDNs and Hostnames

First of all, I want to say that I know how DNS servers work and I also know about the records in a DNS zone file! I'm not an expert, but I have some knowledge about them. I'm confused! Why? Because I don't know the exact difference between hostnames and FQDNs. I know that something like my.example.com. is an FQDN. Somebody told me it's an FQDN because of the existence of the dot at the end of it. I think the final dot refers to the root DNS servers of the world, right? Is my.example.com a hostname? (Note that it doesn't have a final dot!) Maybe "my" is the hostname?! Actually, I'm confused! What's the difference between a hostname and an FQDN, guys? Sorry, my English isn't very good; excuse me if I made a mistake. Thanks in advance.
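
Not part of the original post, but on a typical Linux box the distinction is easy to see from the shell (assuming the machine is actually configured with a domain; the names are illustrative):

    hostname        # short host name, e.g. "my"
    hostname -f     # fully qualified domain name, e.g. "my.example.com"
    hostname -d     # just the domain part, e.g. "example.com"

The trailing dot in my.example.com. simply marks the name as fully qualified relative to the DNS root, so a resolver won't append any search domain to it; a hostname on its own is only the leftmost label.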

apache 2.2 - Apache2 PHP Notice errors logged

My /var/log/apache2/error.log is filling up with PHP Notice errors at about 1 GB per hour. I've tried adding this to apache2.conf: php_value error_log none. And in my /cgi/php.ini: error_reporting = E_ERROR, display_errors = On, display_startup_errors = Off, log_errors = Off. PHP is running through FastCGI. Even though display_errors is On, it is NOT displaying errors. Is there a separate config file I should be editing? OS: Ubuntu Linux 10.04, PHP: 5.3.2, Apache: 2.2.14
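
Not from the post, but a common gotcha in this setup: under FastCGI the php_value lines in apache2.conf are ignored because mod_php isn't in play, so the directives have to land in whichever php.ini the CGI binary actually loads (I'm assuming the stock php5-cgi packaging here). A quick check, followed by the usual way to silence notices while still logging real errors:

    php-cgi -i | grep "Loaded Configuration File"    # confirm which php.ini the FastCGI binary reads

    ; in that php.ini -- keep real errors, drop the notice and deprecation noise
    error_reporting = E_ALL & ~E_NOTICE & ~E_DEPRECATED
    log_errors = On
    display_errors = Off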

ftp - Are there any free software to test web server?

I wrote my own HTTP server. Is there any free software, test package, or toolset to validate whether it complies fully or partially with HTTP 1.0 (RFC 1945)? Moreover, it'd be great if this software could estimate HTTP performance and check for potential security issues. I'd like the same from such software with respect to FTP compliance validation. Answer Lots of questions here. While I'm sure lots of people will tell you their tool does everything if only you will buy it, there are very few tools available which make a reasonable attempt at any one of these. For the security side of things, assuming that you are only interested in serving static content, there is a list of useful software here. For capacity testing you could use ab, which ships with Apache. You might also consider scripting more complex interactions using LoadRunner ($$$) or HTTP::Recorder and WWW::Mechanize. Most of the large software packages available as source code come with automatic …
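
The answer only names ab in passing; for illustration (my own example, not from the answer), a basic capacity test against a locally running server looks like this:

    ab -n 1000 -c 10 http://127.0.0.1:8080/index.html
    # -n is the total number of requests, -c the number of concurrent clients; the report
    # shows requests/second, latency percentiles, and any non-2xx responses, which doubles
    # as a quick sanity check that the server answers well-formed HTTP under load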

Small web server hardware

Our small company is planning on setting up a web server. The website will have a SQL Server Express back end, and will probably be used a few times daily by about 10 employees logging in from the field. It will run 24x7, but occasional interruptions of a couple of hours or a whole weekend will not be a problem. We've been happy with the Dell desktops and laptops we've used, so we were thinking of going with Dell again. When I checked out their website, I found a Small Office desktop Vostro with a dual-core CPU and 2 GB of RAM for $399 (Canadian). An equivalently configured PowerEdge 100 server was nearly double the price. I know that the Vostro will run Windows Server 2003, having already installed it on another Vostro. I realize it's not a huge amount of money, but why the big price difference? What would you recommend we purchase? Answer Another thing to consider is the amount and quality of warranty each purchase offers. Typically servers have by default …

iis 7 - Is it possible to use WebMatrix with pure IIS?

I'd like to check out WebMatrix for publishing our site to IIS automatically (right now, I have to zip it up, copy it out, Remote Desktop into the server, unzip it, etc.). However, every example I can find on how to set up WebMatrix involves Azure, or using a .publishsettings file that you'd get from your hosting provider. I'm curious whether I can publish to a normal, everyday IIS server running on Windows Server 2008. So far, all I've done to the IIS server is install Web Deploy, which I believe is the protocol that WebMatrix uses to publish. When I enter the Remote Site Settings screen, I select Enter settings. I select Web Deploy as the protocol, type in my NT domain credentials (I'm an admin on that server), and put in the site URL for the Site Name and Destination URL. When I click Validate Connection, I get an error. Am I doing something wrong, or is this just not possible to do? Answer You will have to enable it on the server. If you are on IIS 7 you …
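
Not from the original answer, but as background: Web Deploy's management service normally listens on its own HTTPS port, so a quick way to check whether the server side is reachable at all before blaming WebMatrix is something like the following (port 8172 and the msdeploy.axd handler are the usual defaults, which I'm assuming here):

    curl -k https://yourserver:8172/msdeploy.axd
    # an HTTP 401 (authentication required) means the management service is up and listening;
    # a connection refused or timeout usually means the service or the firewall rule isn't enabled yet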

domain name system - How much does the geographical location of DNS servers matter?

We have started to run our own DNS servers located in Asia since that's where our main audience is. However, it seems that some users in the US are having difficulties accessing our website sometimes. I've noticed myself that DNS lookups of our domain from the US are relatively slow (500 msec+). Maybe the problems some users are having are due to other DNS configuration errors, but in general, how much of an issue is the geographical location of DNS servers? Should we have an additional server in the US?
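
Not part of the question, but the penalty is easy to measure directly: query each authoritative server from a US vantage point and compare the round-trip time dig reports (the server names below are placeholders):

    dig @ns1.example.asia   example.com A +noall +stats | grep "Query time"
    dig @ns2.example-us.net example.com A +noall +stats | grep "Query time"

Recursive resolvers only pay that cost on cache misses, but with short TTLs the misses are frequent, so an additional authoritative server (or an anycast DNS provider) near US resolvers usually brings the worst case down from hundreds of milliseconds to a few tens.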

Can SAS drives be used with SATA controllers?

Can I use SAS drives with my SATA controller? What are the compatibility rules and restrictions when mixing these? Answer From Wikipedia: SAS offers backwards compatibility with second-generation SATA drives. SATA 3 Gbit/s drives may be connected to SAS backplanes, but SAS drives may not be connected to SATA backplanes.

Trying to install Mysql on Ubuntu Server 12.04

I am currently trying to install MySQL on my Ubuntu 12.04 server, but I am having problems. When I run sudo apt-get install mysql-server it runs and asks me for a yes, but then it returns "Temporary failure resolving" and "Failed to fetch" errors. I am using PuTTY to manage the server, but I can access it physically. This is what I get when I run the command: root@cloud:/home/tek/openstackgeek# sudo apt-get install mysql-server Reading package lists... Done Building dependency tree Reading state information... Done The following extra packages will be installed: libdbd-mysql-perl libdbi-perl libhtml-template-perl libmysqlclient18 libnet-daemon-perl libplrpc-perl mysql-client-5.5 mysql-client-core-5.5 mysql-common mysql-server-5.5 mysql-server-core-5.5 Suggested packages: libipc-sharedcache-perl libterm-readkey-perl tinyca mailx The following NEW packages will be installed libdbd-mysql-perl libdbi-perl libhtml-template-perl libmysqlclient18 libnet-daemon-perl libplrpc-perl mysql-client-5.5 mysql-cl…
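
The excerpt cuts off before any answer, but "Temporary failure resolving" is apt telling you the machine cannot resolve the archive hostnames at all, so the usual first checks (generic commands, not taken from the post) are:

    cat /etc/resolv.conf            # is a nameserver configured at all?
    nslookup archive.ubuntu.com     # can that nameserver resolve the mirror?
    ping -c 3 8.8.8.8               # is there raw IP connectivity to the internet?

If resolv.conf is empty or points at an unreachable resolver, adding a working nameserver (or fixing the DHCP/static network configuration that writes that file) normally lets apt-get proceed.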

Other domains on shared server showing my webpages

I own a website (gosportweather.co.uk) which is hosted by the same company as other websites on a shared server. The problem is that these other websites appear to be showing my web pages under their domain names. At first I thought someone had copied my pages; however, running the domains through whois shows them all as being on the same IP with the same name servers as my site, so I can't help but wonder if my host has got something wrong at their end. When attempting to visit the sites, Chrome gives the following warning: NET::ERR_CERT_COMMON_NAME_INVALID This server could not prove that it is www.example.com; its security certificate is from www.gosportweather.co.uk. This may be caused by a misconfiguration or an attacker intercepting your connection. I have changed the domain in the error above from the original to example.com in case it is a legitimate site, as I don't want to damage their reputation. The certificates all appear to have expired, but I am confused as …
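
Not from the post, but this is the classic symptom of a shared host serving its default (first) virtual host for HTTPS names it has no certificate for. You can check whose content and certificate the shared IP serves for any given name without touching DNS (203.0.113.10 is a placeholder for the server's IP):

    curl -kv -H "Host: www.example.com" https://203.0.113.10/ -o /dev/null
    openssl s_client -connect 203.0.113.10:443 -servername www.example.com </dev/null 2>/dev/null | openssl x509 -noout -subject -dates

If every name on that IP comes back with the gosportweather.co.uk certificate and content, the hosting company's virtual-host or SNI configuration is falling through to your site as the default, which is something only they can fix.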

permissions - linux/setfacl - Set all current/future files/directories in parent directory to 775 with specified owner/group

I have a directory called "members" and under it there are folders/files. How can I recursively set all the current folders/files, and any future ones created there, to have 775 permissions by default and belong to owner/group nobody/admin respectively? I enabled ACL and remounted, but can't seem to get the setfacl command to do this properly. Any idea how to accomplish this? Answer I actually found something that so far does what I asked for, sharing here so anyone who runs into this issue can try out this solution: sudo setfacl -Rdm g:groupnamehere:rwx /base/path/members/ sudo setfacl -Rm g:groupnamehere:rwx /base/path/members/ R is recursive, which means everything under that directory will have the rule applied to it. d is default, which means for all future items created under that directory, these rules apply by default. m is needed to add/modify rules. The first command is for new items (hence the d); the second command is for old/existing items under …
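
As a quick follow-up (mine, not from the answer): you can confirm the default ACL actually took effect by inspecting the directory and then a freshly created file (paths and group name are the same placeholders used above):

    getfacl /base/path/members/            # should list both group: and default:group: entries for groupnamehere
    touch /base/path/members/newfile
    getfacl /base/path/members/newfile     # the new file should carry a group entry for groupnamehere (execute is masked off for plain files)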

.htaccess 301 redirect PLUS url rewrite

Premise: My photography website is www.domain.com and my blog is at www.domain.com/blog. Everything is on a self-hosted WordPress platform. On my index page I have my photo galleries. I now would like to use a third-party service to host my galleries at www.galleries.com/me. What I would like to achieve: when someone visits www.domain.com they get redirected (301) to www.galleries.com/me, but my domain stays visible in the URL bar; when someone visits www.domain.com/blog everything stays as usual and nothing changes. What I thought: use a redirect rule PLUS a rewrite rule in my .htaccess. Questions: Is it possible? How can I do it? How does it affect SEO? I tried to make it as simple as I could :) Thanks
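
Nothing in the excerpt answers this, but for what it's worth: keeping your own domain visible while serving content from another host is proxying, not a 301 (a 301 changes the address bar by definition). A hedged sketch of what that looks like in .htaccess, assuming mod_rewrite and mod_proxy are enabled on the server (many shared hosts do not allow the proxy flag, and an HTTPS back end additionally needs SSLProxyEngine in the server config):

    RewriteEngine On
    # leave the blog alone
    RewriteCond %{REQUEST_URI} !^/blog
    # proxy everything else to the gallery host while the browser still shows www.domain.com
    RewriteRule ^(.*)$ http://www.galleries.com/me/$1 [P,L]

For SEO, the /blog URLs are untouched; the proxied pages get indexed under www.domain.com, so duplicate content between the two hosts (e.g. via canonical tags on the gallery pages) becomes the main thing to watch.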