
Posts

Showing posts from May, 2015

debian - Slab uses 88Gb of 128Gb available. What could cause this?

We run Debian with kernel 2.6.26-2-amd64 x86_64 GNU/Linux on a server with 128 GB of RAM. Recently our available memory became rather low. Looking at /proc/meminfo showed that Slab was using 88 GB, which is counted as used memory, of course. Is this a problem? I suspect that the memory will be freed when necessary, but I don't know if that could have unwanted side effects. Why would Slab need that much memory? Is there a clear cause for it? Can we avoid this happening in the future? How can we free this memory? Thank you in advance.

> cat /proc/meminfo
MemTotal:     132304500 kB
MemFree:       26669388 kB
Buffers:         237504 kB
Cached:        11881136 kB
SwapCached:          48 kB
Active:         5244640 kB
Inactive:      11714308 kB
SwapTotal:      5751228 kB
SwapFree:       5750436 kB
Dirty:               24 kB
Writeback:            0 kB
AnonPages:      4840256 kB
Mapped:          163968 kB
Slab:          88314840 kB
SReclaimable:  88275644 kB
SUnreclaim:       39196 kB
PageTables:       80852
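Since SReclaimable accounts for almost all of the Slab figure, it is very likely dentry/inode cache, which the kernel reclaims on its own under memory pressure. A minimal sketch for inspecting it and, if really needed, releasing it by hand (the drop_caches sysctl exists on this 2.6.26 kernel):

slabtop -o | head -n 20               # one-shot view of the biggest slab caches
sync
echo 2 > /proc/sys/vm/drop_caches     # drop reclaimable dentries/inodes; harmless, but the caches must warm up again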

filesystems - Cannot mount cdrom in Linux due to "I/O error"

This is a most puzzling error, and I can't seem to find anyone else with quite the same problem. I use a Sony Vaio VGN-FE890 laptop running Arch Linux kernel 2.6.30-ARCH. Inserting a cd into the optical drive makes it spin for a bit, then do nothing. Running dmesg returns the following:

cdrom: This disc doesn't have any tracks I recognize!
sr 0:0:0:0: [sr0] Result: hostbyte=0x00 driverbyte=0x08
sr 0:0:0:0: [sr0] Sense Key : 0x5 [current]
sr 0:0:0:0: [sr0] ASC=0x21 ASCQ=0x0
end_request: I/O error, dev sr0, sector 0
Buffer I/O error on device sr0, logical block 0

The device is /dev/sr0 and running 'sudo mount -t iso9660 /dev/sr0 /media/cdrom' returns:

mount: block device /dev/sr0 is write-protected, mounting read-only
mount: wrong fs type, bad option, bad superblock on /dev/sr0, missing codepage or helper program, or other error (could this be the IDE device where you in fact use ide-scsi so that sr0 or sda or so is needed?) In some cases usefu

exim, variable string expansion

How can I get the list of local domains from a forward file? An example of /etc/exim4/forwards:

a@test.com: a@lala.com   # ignore this line
b@test.com: a@example.com
b@hugo.com: hugo@example.com

Here the string expansion (or whatever it's called) should return test.com : hugo.com. I assume it can be done with readfile and map, but I can't get it to work.

linux - Correct Permissions VPS /var/www

This is a repost of an "off-topic" question on Stack Overflow. My scenario is: I created a user and added that user to sudoers:

visudo
user ALL=(ALL) ALL

Then sudo adduser user www-data and chown www-data:www-data -R /var/www. Did a service restart, then tried:

scp file user@ip:/var/www
Permission denied

The permissions I had applied for folders and files were (not wp-config.php or .htaccess):

drwxr-xr-x
-rw-rw-r--

I tried sudo chmod -R g+w /var/www. I was then able to upload files to /var/www, but this set permissions to 775, so I ran:

find /var/www -type d -exec chmod 755 {} \;

Now I can edit files but not write to the folder via SFTP or SSHFS etc. My question now is: how do I write to /var/www without compromising security? Answer The sudo change affects only commands that you run with the sudo command. Therefore it has no effect in this case. I would prefer making user the owner of all files in /var/www . Then, you can chmod 777 all the directories and ch
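As a hedged alternative layout (a sketch, not the only secure option): keep the SSH user as the owner, give Apache's group read access, and set the setgid bit so new uploads inherit the group. Names below reuse the question's user and www-data:

sudo chown -R user:www-data /var/www
sudo find /var/www -type d -exec chmod 2755 {} \;   # setgid: new files keep the www-data group
sudo find /var/www -type f -exec chmod 644 {} \;

This lets the user write everywhere over SFTP while the web server only reads; directories the application itself must write to (uploads, cache) can be opened up individually with g+w.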

linux networking - KVM guest can't connect to itself after DNAT

Virtual hosting environment (KVM):

Guest:
Ubuntu 14.04.5 LTS \n \l
Linux ari 3.8.0-29-generic #42~precise1-Ubuntu SMP Wed Aug 14 15:31:16 UTC 2013 i686 i686 i686 GNU/Linux

Host:
Ubuntu 14.04.3 LTS \n \l
Linux host 3.13.0-74-generic #118-Ubuntu SMP Thu Dec 17 22:52:10 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

Network:

          eth0 |----------| virbr63                      eth0 |----------|
   ------------|   HOST   |-----------------------------------|   ari    |
   11.22.33.44 |----------| 192.168.63.1        192.168.63.2  |----------|

11.22.33.44 is the public IP address
ari is a virtual machine (guest)
HOST is a physical machine (virtual machine host)
eth0 is a physical network card in HOST
virbr63 is a virtual network adapter

There is an iptables rule on HOST:

-I PREROUTING -p tcp -d 11.22.33.44 --dport 80 -j DNAT --to 192.168.63.2:8888

Let's say mydomain.com resolves to 11.22.33.44. ari is serving all the HTTP requests incoming on 11.22.33.44. Curl-ing mydomain.com works from
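When the guest connects to its own public address, the packet is DNATed back in over virbr63 with the guest's own address as the source, so the reply never returns through the host and the connection stalls. A common hairpin-NAT sketch (rules on HOST, addresses as above):

iptables -t nat -A PREROUTING -p tcp -d 11.22.33.44 --dport 80 -j DNAT --to 192.168.63.2:8888
iptables -t nat -A POSTROUTING -s 192.168.63.0/24 -d 192.168.63.2 -p tcp --dport 8888 -j SNAT --to-source 192.168.63.1

The SNAT forces ari's replies back through the host, where the DNAT is reversed, so the guest sees the answer coming from 11.22.33.44 as it expects. The alternative is split-horizon DNS: have ari resolve mydomain.com to 192.168.63.2 directly.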

cpu usage - Linux Load Averages and HyperThreads

My rough understanding of Linux load average is that each whole number represents one CPU core busy all the time; for example, on a 4-core system a load of 1 means one core is working at capacity. How does HyperThreading factor into this? Is it even considered in load averages? Answer I generally think of a HyperThreaded core as being 20%-30% of a real core, depending on how effectively your application can leverage multiple threads. They are considered in load average and load average thresholds. Here's an example of a dual-socket Intel X5570 Nehalem system before and after enabling Hyperthreading. The OS is CentOS 5.8. The actual average system run queue/load average did not change substantially (the app is pretty much single-threaded), but the load threshold did. That said, there are many times when I disable HyperThreading... For my low-latency and deterministic applications, I want finer control of where application resources are scheduled. There's a penalty for going t
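To interpret the load average you need to know how many logical CPUs the scheduler sees, since hyperthread siblings are counted like any other CPU. A quick check:

lscpu | egrep 'Socket|Core|Thread'   # sockets, cores per socket, threads per core
nproc                                # logical CPUs, i.e. the figure a "fully busy" load equals
grep -c ^processor /proc/cpuinfo     # same figure from procfs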

windows server 2008 - Active Directory Health Checks

I've had some Active Directory troubles lately and was wondering what checks I could do on a regular basis to ensure everything is working optimally? Answer At a smaller company I worked for in the past we used this. It is a script that checks a number of things and reports PASS/FAIL, certainly not a bad tool to try out. Interested to see what others have used.
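A few built-in commands that are commonly run for routine AD, DNS and replication health checks (a hedged starting point rather than a complete checklist):

dcdiag /v
dcdiag /test:dns
repadmin /replsummary
repadmin /showrepl *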

linux - Open Source Software that combines Snort and Cisco Blocked Lists

There used to be this software that existed maybe 8-10 years ago (I thought it was maybe called FloodGate?). It worked by having a Linux box bridge running Snort or Prelude. When it detected a DoS or DDoS attack (and presumably others), it would actually connect to your Cisco routers and block the source IP at the router. Has anyone heard of this software or know where I can get it? Answer I think you're talking about SnortSam: http://www.snortsam.net/

nat - How to access server using public ip when in the network itself?

I've asked this question and even searched around but didn't get a useful answer. Basically, what I'm doing is this: I have a webserver on internal IP 192.168.0.100, port 80. So if I'm in the network it is accessible if I type 192.168.0.100/myportal/login.php. OK, no problem so far. Now, I would like internal network users to access it via our public static IP, i.e. 219.92.xx.xxx/myportal/login.php. If I'm outside of this network there is no problem, I can access it. But how do I make it so that I can use the public IP while I'm on the internal network? Right now it's not practical because I have to use two different addresses depending on my network situation. Why do I want this? Simple: because I want to buy a domain name and use it with the public IP on which I'm hosting my own webserver. If I can't access the public IP from inside, I won't be able to use my domain later when it is assigned to that IP. For example, I won't be able to access it via www.vportal.com/myportal/login.php if I'm inside the networ
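Two common approaches, sketched with placeholder values (LAN 192.168.0.0/24, PUBLIC_IP standing in for 219.92.xx.xxx): either run split-horizon DNS so internal clients resolve the domain straight to 192.168.0.100, or add hairpin NAT on the router/firewall if it is a Linux box you control:

iptables -t nat -A PREROUTING  -s 192.168.0.0/24 -d PUBLIC_IP -p tcp --dport 80 -j DNAT --to-destination 192.168.0.100
iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -d 192.168.0.100 -p tcp --dport 80 -j MASQUERADE

Many consumer routers cannot do the second part (NAT loopback), in which case split DNS is the practical option.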

web server - mod_rewrite and Apache questions

We have an interesting situation in relation to some help desk software that we are trying to set up. This is a web based software application that allows customers and staff to log into it and access tickets and supply updates, etc. The challenge we are having deals with the two different domains that we use and the mod_rewrite rules to make it all work with our SSL certificate that is only bound to one of the domains. I will list the use case scenarios below and the challenges that we are having.

If you access http://support.domain1.com/support then it redirects fine to https://support.domain2.com/support
If you access http://support.domain2.com/support then it redirects fine to https://support.domain2.com/support
If you access https://support.domain1.com/support then it throws an error of "server cannot be found"
If you access https://support.domain1.com/support/ after having visited https://support.domain2.com/support then you are presented with a "this connection
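For the HTTPS-on-domain1 cases a rewrite rule alone cannot help: the TLS handshake happens before any redirect can be sent, so the server needs a certificate valid for support.domain1.com (or a SAN/wildcard certificate covering both names). A sketch of the vhost pair that usually handles this; the certificate paths are assumptions:

<VirtualHost *:80>
    ServerName support.domain1.com
    Redirect permanent / https://support.domain2.com/
</VirtualHost>

<VirtualHost *:443>
    ServerName support.domain1.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/support.domain1.com.crt
    SSLCertificateKeyFile /etc/ssl/private/support.domain1.com.key
    Redirect permanent / https://support.domain2.com/
</VirtualHost>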

linux - How to check which DNS Alias is used to access the server?

I have a host with a database installed on it. In order to access the database, the clients use a DNS alias (CNAME) - AliasOne - and the database port. DNS aliases are managed by colleagues in another department. I had to request a more explicit alias - AliasTwo - for the same host. Now I need to delete AliasOne: how can I check which alias is used to connect to the host/database? I don't get this information in the database logs. I tried to create an iptables rule:

iptables -A INPUT -p tcp -s AliasOne --dport 3306 -j LOG --log-prefix "AliasOne: " --log-tcp-options --log-ip-options

but iptables resolves the alias to the host's address, so it will also log the connection attempts made via AliasTwo. I didn't manage to get the answer with nslookup, dig or last. Am I using them wrong? Any clue, anybody? (I don't want a service interruption, so "delete and see if somebody complains" is not an option ;)) Answer Unless the protocol explicitly

Centos 7 - sshd sftp group permissions messed up after update

CentOS recently updated to 7.3 and there have been problems with sshd SFTP group permissions. I have one user that is chrooted to its home directory, and that user is in the group sftponly. Then I have the /var/www directory, which has 775 permissions, owner apache and owning group sftponly. I have a bind mount pointing from /home/user/files/web to /var/www, so the user can access /var/www even though it is chrooted to its home directory. I can view files in /var/www with that user, but it is impossible to edit or add anything. This worked fine before the big CentOS 7.3 update, and now it has stopped working. Any ideas? Answer This is a known bug and it will be fixed in the next update. Until then, it is best to stay on the previous version.
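For reference, the usual shape of a chrooted, SFTP-only setup like this in /etc/ssh/sshd_config (a sketch to compare against what the update left behind, not a statement of the bug's cause):

Subsystem sftp internal-sftp

Match Group sftponly
    ChrootDirectory %h
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no

Remember that sshd requires every directory in the chroot path to be owned by root and not group-writable, and that writing into /var/www also depends on the bind mount being mounted read-write.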

domain name system - Why does DNS work the way it does?

This is a Canonical Question about DNS (Domain Name Service). If my understanding of the DNS system is correct, the .com registry holds a table that maps domains (www.example.com) to DNS servers. What is the advantage? Why not map directly to an IP address? If the only record that needs to change when I am configuring a DNS server to point to a different IP address is located at the DNS server, why isn't the process instant? If the only reason for the delay is DNS caches, is it possible to bypass them, so I can see what is happening in real time? Answer Actually, it's more complicated than that - rather than one "central registry (that) holds a table that maps domains (www.mysite.com) to DNS servers", there are several layers of hierarchy. There's a central registry (the Root Servers) which contains only a small set of entries: the NS (nameserver) records for all the top-level domains - .com , .net , .org , .uk , .us , .au , and so on. Those serve
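The delegation chain is easy to watch for yourself: dig can walk it from the root down, showing the root servers referring to the .com servers, which refer to the domain's own nameservers, which finally return the A record.

dig +trace www.example.com A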

spam filter - Exchange 2013 - Remove warning text from outgoing email body

I created a transport rule on our Exchange Server 2013 that adds a warning text at the top of the email body of all external incoming emails. This is to alert employees to the potential risks of external emails, since their links and attachments may be harmful. The text is as follows: Text CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you recognize the sender and know the content is safe. Now, when a user replies to such an email, I want the warning text to be removed when Exchange processes the outgoing message. How can I remove the warning text from outgoing emails in Exchange? I was looking for something in the rules, but there is none I could find. Any help will be appreciated. Thanks.

domain name system - Why should one have a secondary DNS server?

I'm very confused. I basically understand how DNS works. Here's an example that helps illustrate what I'm having trouble understanding. Right now, I run a small web server. I use my provider's DNS manager, so I don't have a DNS server hosted on the machine. Let's say for a second that I don't use my host's DNS, and I decide to set up a DNS server on my server. Hypothetical scenario: my (entire) server goes down - DNS included. Why do I need backup DNS? If the server is down, who cares if the DNS server is down too, considering that even if I had DNS up (it wasn't on the crashed server), it wouldn't be able to forward requests anyway since the server would be down? Is the point of having secondary DNS to be able to change the IP addresses that your DNS server points to, so if your webserver was down, you could redirect traffic to a backup? How would you switch to the secondary provider, in the event that your main DNS provider becomes una

exchange 2013 - server2012 domain name and exchange2013

I have a Server 2012 machine with AD set up using a domain.local domain. I would like to add Exchange to the new Server 2012 machine we just added. Our current setup:

1 x Server 2012 domain controller with AD, domain domain.local
1 x Server 2012 (new) that I would like to add Exchange to
Hosted email (name.net) and website from GoDaddy (website name.net)

I would like to set up an Exchange server to get away from GoDaddy email, but leave the website there for now. Should I rename my domain.local to .net? Any help from experts on what they would do would be appreciated. Thanks

performance - Why should I choose an Enterprise Storage Vendor?

I have been basically psyched out by some of the sales consultants from enterprise vendors such as EMC and NetApp. We have a new requirement to store around 30 TB of unstructured data, and around 100-200 GB of structured data (mostly MySQL). I am considering setting up a couple of custom-built JBODs running FreeNAS, OpenFiler or OpenSolaris for the unstructured data. The total cost with redundancy and backup comes to less than 10,000 USD for me. For the latter, I am planning to use a standard HP DL-180 G6 server with RAID 1 (2 SAS drives), with incremental backup to a backup system. The entire cost works out to less than 12,000 USD. The files inside the JBOD are often accessed by a web application used by some of our mobile workforce. Presently the data size is less than 1 TB, but this should increase to 20-22 TB within the next 2 years. The number of users accessing it may also go up, which means it would be accessed very often. Something like 20-25 concurrent users

All my emails to Yahoo!, Hotmail and AOL are going to Spam, though I've implemented every validation method (works for Gmail though)

I've implemented everything and checked everything (SPF, DomainKey, DKIM, reverse lookup), and only Gmail is allowing my emails to go to the Inbox. Yahoo, Hotmail and AOL are all sending my messages to Spam. What am I doing wrong? Please help! Following are the headers of messages to Yahoo, Hotmail and AOL. I've changed names and domain names. The domain names I'm sending mail from are polluxapp.com and gemini.polluxapp.com.

Yahoo:
From Shift Licensing Tue Jan 26 21:55:14 2010
X-Apparently-To: gamerfromhell13@yahoo.com via 98.136.167.163; Tue, 26 Jan 2010 13:59:12 -0800
Return-Path:
X-YahooFilteredBulk: 208.115.108.162
X-YMailISG: gPlFT1YWLDtTsHSCXAO2fxuGq5RdrsMxPffmkJFHiQyZW.2RGdDQ8OEpzWDYPS.MS_D5mvpu928sYN_86mQ2inD9zVLaVNyVVrmzIFCOHJO2gPwIG8c2 L8WajG4ZRgoTwMFHkyEsefYtRLMg8AmHKnkS0PkPscwpVHtuUD91ghsTSqs4lxEMqhqw60US0cwMn_r_DrWNEUg_sESZsYeZpJcCCPL0wd6zcfKmtYaIk idsth3gWJPJgpwWtkgPvwsJUU_cmAQ8hAQ7RVM1usEs80PzihTLDR1yKc4RJCsesaf4NUO_yN1cPsbFyiaazKikC.eiQk4Z3VU.8O5Vd8i7m
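For what it's worth, the published records can be double-checked from the shell; the DKIM selector below is a placeholder for whatever the signing server actually uses:

dig +short TXT polluxapp.com                          # SPF
dig +short TXT selector._domainkey.polluxapp.com      # DKIM public key
dig +short -x 208.115.108.162                         # PTR for the sending IP Yahoo saw

Yahoo, Hotmail and AOL weigh sender reputation heavily, so correct records combined with a cold or listed IP can still land in Spam.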

nginx and apache - multiple virtual hosts on the same IP - correct configuration

I am trying to run nginx as reverse proxy for Apache on the same machine and serve different websites from it. My question is - is it possible to add virtual hosts only to nginx and have it pass the url/hostname/path, etc to Apache automatically depending on which host is requested. OR do I need to set up a virtual host for every site (domain) in both nginx and Apache? Also, are there any potential issues with this setup? What I am planning to have in my nginx config is something like this for each domain (Apache is running on port 8080):

server {
    listen 80;
    root /var/www/site1.com/;
    server_name site1.com;
    location / {
        try_files $uri $uri/ /index.php;
    }
    location ~ \.php$ {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
        proxy_pas
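Because the Host header is passed through, Apache still picks its own virtual host from that header, so each domain normally needs a matching vhost (or a deliberate catch-all) on the Apache side too. A minimal sketch of the proxy part, assuming Apache listens on 127.0.0.1:8080:

server {
    listen 80;
    server_name site1.com www.site1.com;

    location / {
        proxy_pass         http://127.0.0.1:8080;
        proxy_set_header   Host            $host;
        proxy_set_header   X-Real-IP       $remote_addr;
        proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}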

linux - What is the best method to add a new hard drive in RHEL 5.4?

What is the best method to add a new hard drive to a RHEL 5.4 Linux machine? Please go through the following details of the machine and help me add one 250 GB hard disk to increase the available disk space.

[root@localhost ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/hda3             220G  3.7G  205G   2% /
/dev/hda1              99M   12M   83M  13% /boot
tmpfs                 2.0G     0  2.0G   0% /dev/shm

Answer Once you physically install the disk, you'll need to do the following: use 'fdisk /dev/hdb' to create a partition on the new disk (in this case the new partition would likely be /dev/hdb1). You'll need to set the partition as a primary partition. Write a file system -- for ext3 you'd use 'mkfs.ext3 /dev/hdb1'. I also like to assign a disk label to make mounting simpler: e2label /dev/hdb1 /label (where /label is whatever you want, such as /backup). Update /etc/fstab so that the disk is mounted on boot. LABEL=/
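Condensed into commands (a sketch that follows the answer's assumption that the new disk shows up as /dev/hdb; the /data mount point is just an example):

fdisk /dev/hdb                                     # create one primary partition -> /dev/hdb1
mkfs.ext3 /dev/hdb1                                # write an ext3 filesystem
e2label /dev/hdb1 /data                            # optional label
mkdir /data
echo "LABEL=/data   /data   ext3   defaults   1 2" >> /etc/fstab
mount /data                                        # or 'mount -a' to verify the fstab entry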

networking - How to work out how many IP's you've got available on a /29 network mask

This might sound like a stupid question, but I would really like to know how I would work out how many IPs I've got available on this network range: 196.44.198.32/29. Can someone explain to me what the /29 means and how you calculate it - the number of IPs you've got available, the one that would be used for broadcast, etc. Kind regards, Conrad Answer For such use you may use a pretty tool named ipcalc:

Address:   196.44.198.32        11000100.00101100.11000110.00100 000
Netmask:   255.255.255.248 = 29 11111111.11111111.11111111.11111 000
Wildcard:  0.0.0.7              00000000.00000000.00000000.00000 111
=>
Network:   196.44.198.32/29     11000100.00101100.11000110.00100 000
HostMin:   196.44.198.33        11000100.00101100.11000110.00100 001
HostMax:   196.44.198.38        11000100.00101100.11000110.00100 110
Broadcast: 196.44.198.39        11000100.00101100.11000110.00100 111
Hosts/Net: 6                    Class C

Also you can use this sim
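The arithmetic behind that output: /29 means 29 of the 32 address bits are the network prefix, leaving 3 host bits, i.e. 2^3 = 8 addresses in the block, of which the first (network) and last (broadcast) are reserved, leaving 6 usable hosts. The same sum in shell, just to illustrate:

echo $(( 2 ** (32 - 29) ))        # 8 addresses in a /29
echo $(( 2 ** (32 - 29) - 2 ))    # 6 usable host addresses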

backup - Primary and secondary name servers : Do they have to be hosted in same company?

I have my domain hosted and I received two name servers (ns1.hostingcompany.com and ns2.hostingcompany.com) to be set at my domain registrar. I know that if the primary name server goes down, resolvers automatically fall back to the secondary name server, but my question is: what happens if my host's whole system goes down? Since ns1 and ns2 belong to the same company, would my domain resolve to anything? Or can I have ns2 on a different server with a different company? If so, how do I configure the ns2 backend? Thank you

Hosting my own DNS server for my domain, force IP resolution based off visitor's IP address

This is specifically for my domain; the users/visitors will not use my DNS server at all. Is this even possible? I need to set up a DNS server that I can host my domain on, and that, when queried, will choose an IP address for the user based on criteria that I set on my server. For example: a user from Canada requests an A record lookup on mydomain.com and it returns 10.0.0.20, while a user from the Netherlands performing the same lookup gets 10.0.0.21. Ideally I would like to be able to take the requesting IP address (user) and run my own scripts/checks on it, even down to accounting for the ASN of the requester and giving a specific IP address. Is this possible with hosting my own DNS server? I don't have much experience outside of using free or third-party DNS services. This needs to be done at the DNS level and not through redirecting traffic using a reverse proxy.

windows - What should the order of DNS servers be for an AD Domain Controller and Why?

This is a Canonical Question about Active Directory DNS Settings. Related: Assuming an environment with multiple domain controllers (assume that they all run DNS as well): in what order should the DNS servers be listed in the network adapters for each domain controller? Should 127.0.0.1 be used as the primary DNS server for each domain controller? Does it make any difference, if so what versions are affected and how? Answer According to this link and the Windows Server 2008 R2 Best Practices Analyzer, the loopback address should be in the list, but never as the primary DNS server. In certain situations like a topology change, this could break replication and cause a server to be "on an island" as far as replication is concerned. Say that you have two servers: DC01 (10.1.1.1) and DC02 (10.1.1.2) that are both domain controllers in the same domain and both hold copies of the ADI zones for that domain. They should be configured as follows: DC01 Primary

permissions - Apache CentOS 7 403 Forbidden Anyone?

I am trying to set up a situation where I can FTP to my Linux CentOS 7 server and update the web site files from my Windows 7 system. At this point I can FTP to my user's folder using vsftpd (/home/robert) and Apache seems to work for the default web site (/var/www/html). I created a virtual host for port 8080 and if I point it to /var/www/8080/public_html it works fine, but as soon as I point it to /home/robert/public_html it comes back with 403 Forbidden. You don't have permission to access / on this server. The Apache error log shows:

[Wed Mar 18 16:12:27.546621 2015] [core:error] [pid 21204] (13)Permission denied: [client 192.168.1.66:57090] AH00035: access to / denied (filesystem path '/home/robert') because search permissions are missing on a component of the path

The Apache conf file (/etc/httpd/conf/httpd.conf) has this entry for the virtual host:

Listen 80
Listen 8080
# Virtual Hosts
ServerName 192.168.1.10:8080
# DocumentRoot /var/www/8080/public_html
Docum
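The AH00035 message points at directory traversal rights rather than the vhost itself: every directory leading to the DocumentRoot needs the execute ("search") bit for Apache, and on CentOS 7 SELinux must also allow httpd to read content under /home. A hedged fix sketch:

chmod 711 /home/robert
chmod 755 /home/robert/public_html

# SELinux (enforcing by default on CentOS 7)
chcon -R -t httpd_sys_content_t /home/robert/public_html
setsebool -P httpd_enable_homedirs on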

iis 7.5 - 301 Redirect microsite to a section of a main site in IIS 7.5

I need to redirect an entire microsite (mymicrosite.com) to relevant pages on my main site (mysite.com). My main site has a custom 301 module (built into the 404 page) which checks unfound paths against a list of paths where we've moved a page. So if I simply redirect the entire microsite at domain level I can handle all the paths from it in the 301 module. But: I want the index page for my microsite to map to mymainsite.com/section1, instead of to the domain itself. So:

mymicrosite.com/product1 > Domain redirect > mymainsite.com/product1 > 404 > 301 Module > mymainsite.com/microsite-product-1

but

mymicrosite.com > Domain Redirect > mymainsite.com

I need

mymicrosite.com > Domain Redirect > mymainsite.com/microsite-products

Can this be done with URL Rewrite, and if so, what would the regex look like that would rewrite a domain for all URLs with a domain/path structure but add a path if the URL to redirect contained ONLY the domain? Answer
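A sketch of how the two cases could look as IIS URL Rewrite rules in the microsite's web.config (inside <system.webServer>); the rule names are arbitrary and the host/path values are the ones from the question:

<rewrite>
  <rules>
    <rule name="Microsite root" stopProcessing="true">
      <match url="^$" />
      <action type="Redirect" url="http://mymainsite.com/microsite-products" redirectType="Permanent" />
    </rule>
    <rule name="Microsite paths" stopProcessing="true">
      <match url="^(.+)$" />
      <action type="Redirect" url="http://mymainsite.com/{R:1}" redirectType="Permanent" />
    </rule>
  </rules>
</rewrite>

The first rule fires only when the request path is empty (the bare domain); the second passes any non-empty path straight through to the main site, where the existing 404/301 module takes over.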

nagios - Monitoring VMware ESXi (free) vs. vSphere

I have two hosts running the free ESXi hypervisor. However, we use Nagios for monitoring, and I've received conflicting information about how we should monitor these systems. Are my findings below accurate?

ESXi with free license does not support SNMP monitoring via Nagios. True?
vSphere supports SNMP monitoring via Nagios. True?
Upgrade to vSphere simply requires a license change in the host. Really?

I was under the impression that ESXi does not include the RHEL environment that would allow us to install the Nagios plugins, so it seems weird that a simple license change would suddenly give us root access, and allow us to monitor it. My co-worker said he was recently forced to rebuild a vSphere host from scratch instead of upgrading ESXi, so I'd like to know if that is a requirement or not. Also, if you monitor your VMware hosts with Nagios, please let me know if you have a better way of doing it. Answer I'm a VMware neophyte and I've never been able to un

bios - SuperMicro server won't boot after power cycle

We power cycled one of our SuperMicro machines and it does not boot anymore. It seems it won't even get to the BIOS loading stage and peripherals (VGA monitor, USB cable) are not detected. All indicators on the chassis itself seem fine (namely the power supply, CPU overheating and even network connection). The LEDs for the PSUs are also green. We tried removing the disks and booting from a Centos disk but no luck. To me this seems like a mobo/BIOS issue, but we are completely stuck at the moment, so any suggestions on how to find / fix the issue would come in handy.

networking - Running own DNS in intranet, how to reach outside DNS from clients

The scenario: I am creating a network for 8 Windows 7 workstations running in a student laboratory. They are supposed to browse the internet, but not to be reachable from outside their intranet. To achieve this, I run an Ubuntu 14.04 server with 2 NICs. One NIC (em1) is connected to our department VLAN, i.e. the internet. I'm routing the workstation intranet to NIC em2 via IPv4 forwarding using iptables' MASQUERADE feature. Schematic:

internet
   |
departments VLAN
   |
Ubuntu server / iptables
   |
Switch-------
 | | | | |
 W W W W W ...

After some hiccups, this works fine; see my other post for the full picture (and some of my configurations): askubuntu. The Ubuntu server is the only machine with a valid IP from the VLAN and needs to use the department VLAN's DNS servers, or it can't reach the internet. As I found out via my other post, all the clients need to use our department's DNS servers as well, as their packets run via the Ubuntu router and are masqueraded as p
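One common alternative is to run a small caching forwarder on the Ubuntu router itself, so the workstations only ever talk to the router and the router forwards queries to the department's DNS servers over em1. A dnsmasq sketch with placeholder addresses:

# /etc/dnsmasq.conf
interface=em2                 # only answer on the lab-facing NIC
listen-address=192.168.1.1    # the router's address on the workstation subnet
server=10.10.0.53             # department DNS server
server=10.10.0.54

The Windows 7 clients would then get 192.168.1.1 as their only DNS server.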

monitoring - How do they get the 'response time' à la Pingdom, etc.?

I just received this report from Pingdom, and was wondering how they know the 'response time' information regarding the site. The GET request does not give that information. Answer ... was wondering how do they know the 'response time' information regarding the site. I am not sure what exactly Pingdom does, but the standard seems to be to measure the complete page load time, but without images. See for example: http://www.alertfox.com/Tools/LoadTime/ "This test measures the response time (HTML load time without images) from three monitoring stations distributed worldwide." Technically that is the same as http://msdn.microsoft.com/en-us/library/system.net.httpwebrequest.aspx
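curl can produce the same kind of measurement from the command line; the timing variables below are standard curl write-out fields and the URL is a placeholder:

curl -o /dev/null -s -w 'dns: %{time_namelookup}s  connect: %{time_connect}s  ttfb: %{time_starttransfer}s  total: %{time_total}s\n' http://www.example.com/

time_starttransfer is the classic "time to first byte", while time_total includes downloading the HTML (but, as with the services above, none of the images).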

Munin fills server memory

In the last weeks, it happened several times to me that my vserver (Debian Lenny) was out of RAM (500M) and therefore wasn't able to run apache anymore. When looking at the processes with top , I saw that there were many open munin-limits and munin-cron processes that consumed most of the memory. My guess would be that sometimes Apache temporarily needs more memory, which prevents munin-cron from running. And if munin-cron isn't able to stop itself, it would fill the memory until nothing is left. I don't know whether this guess is true, but could maybe someone know what the problem is and how to prevent it? If necessary I'll remove munin, but I'd prefer to keep it running. Answer munin-cron calls munin-limits, if something prevents munin-limits from finishing you'll end up with munin-cron and munin-limits processes. As far as I can see (I don't use munin-limits), munin-limits is responsible for forwarding notifications regarding configured t

http status code 403 - Apache serves some files, others get 403

I've just set up a CentOS 5.5 install with Apache and MapServer. While trying to do the tutorial for MapServer, I've found that Apache returns 403 Forbidden when accessing any of the tutorial files, yet any file I create and upload is served normally. When checked with ls -l the permissions are exactly the same, the user and group are exactly the same, and the files are in the same folder - yet I can't access a .txt from the tutorial, whereas if I copy its contents into another file I can access that. Apache's error logs simply say I don't have permission to access the file, so they aren't telling me anything more useful, and my searches have all told me to ensure the permissions are set correctly (they look the same). It's a fresh install with my web docs residing in /var/www and find /var/www/ -name .htaccess doesn't return anything, so I'm confident there aren't any .htaccess files preventing me from accessing anything. nginx can serve the file
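On CentOS the usual suspect when classic permissions match but Apache still returns 403 is the SELinux context: files extracted or copied from a home directory keep a label httpd is not allowed to read, while files created directly under /var/www get the right one. A quick check (file names are examples):

ls -Z /var/www/tutorial.txt /var/www/my-copy.txt   # compare the SELinux labels
restorecon -Rv /var/www                            # relabel everything to the policy default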

nameserver - Virtual server problem in apache's httpd.conf

When I use the following code:

ServerName subone.domain.tld
DocumentRoot /var/www/subdomain/subone/

ServerName subtwo.domain.tld
DocumentRoot /var/www/subdomain/subtwo/

Every query goes to /var/www/subdomain/subone. Including: domain.tld, subone.domain.tld, subtwo.domain.tld, ... When I add a "NameVirtualHost *" to the beginning of the file, everything goes to /var/www. What am I doing wrong? Answer I think you are missing the port numbers maybe? e.g. NameVirtualHost *:80 and VirtualHost *:80
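Putting the answer's suggestion together, a name-based vhost pair for Apache 2.2 would look roughly like this (a sketch; each pair of directives goes inside its own <VirtualHost> container):

NameVirtualHost *:80

<VirtualHost *:80>
    ServerName   subone.domain.tld
    DocumentRoot /var/www/subdomain/subone/
</VirtualHost>

<VirtualHost *:80>
    ServerName   subtwo.domain.tld
    DocumentRoot /var/www/subdomain/subtwo/
</VirtualHost>

Requests for any other host name (e.g. the bare domain.tld) fall through to the first vhost listed.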

amazon ami - how to downgrade Perl to 5.8.8?

I have an Amazon instance (Amazon Linux AMI release 2011.02.1.1 (beta), 2.6.35.11-83.9.amzn1.i686) and I want to downgrade the Perl version from v5.10.1 to v5.8.8, but when compiling perl 5.8.8 I get this error:

asm/page.h: No such file or directory
make[1]: *** [SysV.o] Error 1
make[1]: Leaving directory `/perl-5.8.8/ext/IPC/SysV'

How can I solve this problem?

Tomcat configuration for single IP address with multiple domains and SSL Certificates

I am trying to configure Tomcat to serve the same content to two different domains with different SSL certificates but the same IP address. I have created two connectors; however, even though the domain is different, I am getting the following error on startup:

Caused by: java.net.BindException: Address already in use

Is it even possible to allow Tomcat to serve two domains, with different SSL certificates, on the same IP address? Answer According to the docs here, Tomcat 8.5 supports SNI. Alternatively, you could add subject alternative name extensions for the additional host names you want to cover.

apache 2.2 - Blocking directory listing on a wamp server

So I have a website that runs through WAMP; however, if you type a path in the address bar, you get an index of the directory, which I don't want. I have already tried adding a .htaccess file in the directory to stop it indexing (saying IgnoreIndex * ) and went into the Apache httpd.conf file, changing Include conf/extra/httpd-autoindex.conf to # Include conf/extra/httpd-autoindex.conf - however, neither has worked... I have tried restarting my server, restarting services, but nothing... Help?
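IgnoreIndex is not an Apache directive (the similarly named IndexIgnore only hides entries from a listing). The directive that switches auto-generated listings off, so a directory without an index file returns 403 instead, is:

# in the directory's .htaccess, or in the matching <Directory> block of httpd.conf
Options -Indexes

Restart or reload Apache after changing httpd.conf; a .htaccess change takes effect immediately as long as AllowOverride permits Options.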

windows - Scripts on UNC paths take very long to run

I have several scripts in UNC paths (from Windows batch files to PHP scripts). No matter how I run them (double click on explorer, my editor's run command menu or Windows command prompt) they take really long to start running (like 14 seconds). Once they get started they run normally. This doesn't happen if I run them from mapped drives. I'm using Windows XP Professional SP3 inside an Active Directory domain and files are hosted in a Windows Server box (not sure about the version, it's an HP dedicated file server with bundled OS). Why does it happen? Is there a way to speed up things while using UNC paths?

Finding out what user Apache is running as?

I want to secure a file upload directory on my server as described beautifully here, but I have one problem before I can follow these instructions: I don't know what user Apache is running as. I've found a suggestion that you can look in httpd.conf and there will be a "User" line, but there is no such line in my httpd.conf file, so I guess Apache is running as the default user. I can't find out what that is, though. So, my questions are: how do I find out what the default user is? Do I need to change the default user? And if the answer is yes and I change the default user by editing httpd.conf, is it likely to screw anything up? Thanks!
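Two quick ways to answer the first question from the shell (the usual distribution defaults are www-data on Debian/Ubuntu, apache on Red Hat/CentOS, and daemon for a stock source build):

ps axo user,comm | egrep '(apache2|httpd)' | sort | uniq -c          # the non-root entries are the worker user
grep -Ri '^[[:space:]]*User' /etc/httpd/ /etc/apache2/ 2>/dev/null   # the directive may live in an included file, not httpd.conf itself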

ftp - Permanent mount and bind on CentOS?

I have set up a CentOS 6.2 VirtualBox along with FTP and Apache root of /var/www/html/. By default, the user ftp account is set to /home/user_name/. However, I'm guessing it's common practice to give that user access to the /var/www/html/. I was able to mount and bind the directory to the user directory as such: mkdir /home/ftp_user/html/ mount --bind /var/www/html/ /home/ftp_user/html/ But as soon as I shut down my VirtualBox, this bind disappears. Is there any way to make this permanent? Answer Sure - put the bind mount in your /etc/fstab . Device is your source directory, mount point is your mount point, and the type is 'bind'. Something like this: /var/www/html/ /home/ftp_user/html/ bind rw,bind 0 0

Apache Adding DocumentRoot to URL when www not used - could be rewrite issue

When I visit www.mysite.com all is well. When I visit mysite.com (without the www) I get redirected to http://www.mysite.com/home/admin/domains/mysite.com/public_html which gives a 404 error. /home/admin/domains/mysite.com/public_html is my DocumentRoot. I believe this problem occurs without rewriting happening... but I could be wrong. I think it may be something with my Apache config. Hopefully someone will spot a common mistake. My VirtualHost setting was generated by DirectAdmin and is as follows:

ServerName www.mysite.com
ServerAlias www.mysite.com mysite.com
ServerAdmin webmaster@mysite.com
DocumentRoot /home/admin/domains/mysite.com/public_html
ScriptAlias /cgi-bin/ /home/admin/domains/mysite.com/public_html/cgi-bin/
UseCanonicalName OFF
SuexecUserGroup admin admin
CustomLog /var/log/httpd/domains/mysite.com.bytes bytes
CustomLog /var/log/httpd/domains/mysite.com.log combined
ErrorLog /var/log/httpd/domains/mysite.com.error
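One way to take the bare domain out of the equation entirely is a dedicated vhost that only issues a canonical redirect; a hedged sketch:

<VirtualHost *:80>
    ServerName mysite.com
    Redirect permanent / http://www.mysite.com/
</VirtualHost>

If it is defined before the existing vhost it wins the name-based match for mysite.com; otherwise remove mysite.com from the ServerAlias line of the DirectAdmin-generated vhost as well.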

eAccelerator Causes PHP Include to Fail in Wordpress

SERVER: Linux CENTOS 6, PLESK 10.4.4. I have been installing Wordpress on many subdomains on our dedicated server. All of them run CRON jobs every 10 minutes. Long story short, the time to load first byte was getting to over 10 seconds. I did some research and found that eAccelerator helps with speed issues for PHP-intensive websites and another website that gives some instruction on how to do this. http://imanpage.com/code/how-install-yum-zend-optimizer-eaccelerator-and-apc After installing the Atomic repo and doing a YUM update I installed eAccelerator like this:

yum install php-eaccelerator.x86_64

I checked the PHP version after the install and found this:

PHP 5.3.14 (cli) (built: Jun 14 2012 16:34:56)
Copyright (c) 1997-2012 The PHP Group
Zend Engine v2.3.0, Copyright (c) 1998-2012 Zend Technologies
    with eAccelerator v0.9.6-svn358-dev, Copyright (c) 2004-2007 eAccelerator, by eAccelerator
    with the ionCube PHP Loader v4.0.10, Copyright (c) 2002-2011, by ionCube Ltd.

So I was like

amazon web services - Same SSL for multiple domains

I have a GoDaddy domain that points to an AWS load balancer with an SSL cert. Now I want to add another domain B to point to the same load balancer. Can I use the same SSL cert or is it associated with the domain name? Do I have to buy a UCC SSL cert? What is that anyway? How should I approach this? Thanks Answer SSL certificates are tied to a particular domain, so no to #1. #2 works nicely as an option. A UCC/SAN certificate is just an SSL certificate with multiple valid domain names in it (called Subject Alternative Names). UCC/SAN is what I use for our Amazon AWS load balancer and it works nicely. The only downside of such a certificate is that it shows the other domain names it's used for. If you have paranoid/picky clients this may be problematic.
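To see which names an existing certificate already covers (for example the one currently on the load balancer), openssl can dump the SAN list; the host name below is a placeholder:

echo | openssl s_client -connect www.example.com:443 -servername www.example.com 2>/dev/null | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'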

Automated Load/Stress testing via a continuous integration server

My company currently has stress tests that are run manually through JMeter. We also use TeamCity for automation of JUnit testing. It's become clear that we need to automate our stress testing as well to provide more generalized testing of our entire web application. I have been looking for a solution where I could use JMeter within TeamCity but I have not yet found anything. Has anyone done this successfully? Anyone have other recommendations that I should consider? Thanks, Casey

Update May 15th: After some more research I have found some interesting scripts, particularly jmeter-ec2. The ec2 API is a little cryptic, but I could see the following working from within TeamCity:

Create an ec2 AMI with the latest version of our software on it.
Launch the AMI as a virtual instance.
Wait for the server to come online.
Run jmeter-ec2 against the server with the JMeter test set.
Use jmeter-ec2 to retrieve the test results.
Parse the test results and report back to TeamCity.

This seems to reach the desired result but it also se
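For the plain "run JMeter from TeamCity" part, JMeter's non-GUI mode can be driven from an ordinary command-line build step; the file names and property below are placeholders:

jmeter -n -t stress-plan.jmx -l results.jtl -Jtarget.host=staging.example.com

The resulting .jtl file can then be parsed in a follow-up step to decide whether the build passes and to publish metrics back to TeamCity.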

.htaccess - rewrite rule does not rewrite url as expected

I have a problem with a CMS website that normally generates readable URLs. Sometimes it happens that navigation links are shown as www.domain.com/22, which results in an error, instead of www.domain.com/contact. I have not found a solution for this yet, but the page works if the URL is www.domain.com/index.php?id=22. Therefore, I'm trying to rewrite www.domain.com/22 to www.domain.com/index.php?id=22 and I have used this rewrite rule:

RewriteRule ^([1-9][0-9]*)$ index.php?id=$1 [NC]

I tested it using http://htaccess.madewithlove.be and there it shows the correct result, but on the website no rewrite is happening.

Begin: Rewrite stuff
RewriteEngine On
RewriteRule ^(typo3|t3lib|tslib|fileadmin|typo3conf|typo3temp|uploads|showpic.php|favicon.ico)/ - [L]
RewriteRule ^typo3$ typo3/index_re.php [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME} !-l
RewriteRule .* index.php [L]
Options +FollowSymLinks
RewriteCond %{HTTP_H
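One thing worth checking is rule order: the TYPO3 block ends with a catch-all RewriteRule .* index.php [L] that matches /22 (it is neither a file, a directory nor a symlink) and rewrites it to index.php without the id. If the custom rule is placed below that catch-all it never takes effect, so a hedged fix is to move it above the catch-all and let it terminate the rule set itself:

RewriteRule ^([1-9][0-9]*)$ index.php?id=$1 [L]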

hp - Proliant server will not accept new hard disks in RAID 1+0?

I have an HP ProLiant DL380 G5 with two logical drives configured with RAID. One logical drive is RAID 1+0 with two 72 GB 10k SAS 1-port drives, spare no. 376597-001. I had one hard disk fail and ordered a replacement. The configuration utility showed an error and would not rebuild the RAID. I presumed a hard disk fault and ordered a replacement again. In the meantime I put the original failed disk back in the server and it started rebuilding. It currently shows OK status, however in the log I can see hardware errors. The new disk has come and I again have the same problem of it not accepting the hard disk. I have updated the P400 controller with the latest firmware, 7.24, but still no luck. The only difference I can see is that the original drive has firmware 0103 (same as the RAID drive) and the new one has HPD2. Any advice would be appreciated. Thanks in advance. Logs from server:

ctrl all show config

Smart Array P400 in Slot 1 (sn: PAFGK0P9VWO0UQ)
   array A (SAS, Unused Space: 0 MB)