

Showing posts from January, 2015

Redirect NGINX HTTPS domain to new HTTPS domain

After spending hours on DuckDuckGo, this is the best answer I arrived at, and it still does not redirect an existing HTTPS domain to a new HTTPS domain:

server {
    listen 80;
    listen 443 ssl;
    server_name www.olddomain.com olddomain.com;
    rewrite 301 https://newdomain.com$request_uri;
}

The browser gives an Insecure Connection error. I have also tried things like:

server {
    listen 443 ssl;
    server_name olddomain.com;
    ssl on;
    ssl_certificate /etc/ssl/certs/OLD.crt;
    ssl_certificate_key /etc/ssl/private/OLD.key;
    # enables all versions of TLS, but not SSLv2 or 3, which are weak and now deprecated
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers "ALLLLLLTTHHISSSS";
    ssl_prefer_server_ciphers on;
    rewrite 301 https://newdomain.com$request_uri;
}

This option stops giving an error, but the rewrite does not work and the browser lands on a "Welcome to nginx!" page.
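A sketch of what usually fixes this, reusing the certificate paths from the question: rewrite expects a regex as its first argument, so in rewrite 301 ... nginx treats "301" as the pattern to match and ordinary URIs are never rewritten; the idiomatic form is return 301. The old domain's certificate also has to stay configured, because the TLS handshake completes before nginx can send any redirect:

server {
    listen 80;
    listen 443 ssl;
    server_name olddomain.com www.olddomain.com;

    # Still serve the OLD domain's certificate here; without it,
    # browsers show a security error before the redirect can happen.
    ssl_certificate     /etc/ssl/certs/OLD.crt;
    ssl_certificate_key /etc/ssl/private/OLD.key;

    return 301 https://newdomain.com$request_uri;
}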

What is the canonical name for domain names with extra parts?

I am confused about domain names (I think). I call these things, i.e. names you can buy, "domain names":

bbc.co.uk
google.com

And I call these things, i.e. extensions of those names, "host names":

www.bbc.co.uk
mail.yahoo.com
arts.mit.edu
hello.there.example.com

Is this naming scheme correct? Are there official definitions for these? In particular, what is each of the texts between the dots called (i.e. the name for "www", "bbc", "edu", "example")?
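For reference, the DNS specification (RFC 1034) calls each dot-separated part a label; the whole dotted string is a domain name, and a domain name that resolves to a specific machine is commonly called a hostname. Taking the last example from the question apart:

hello.there.example.com
                    com   <- top-level domain (TLD)
            example       <- second-level domain (the part you register)
      there               <- subdomain label
hello                     <- leftmost label, often the host itself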

nginx - HHVM randomly stops running

The Background: I've recently replaced php5-fpm with hhvm, which really is, as they say, a "holy performance grail". I installed hhvm and removed php5-fpm (is a fallback really needed?) following these instructions: https://bjornjohansen.no/hhvm-with-fallback-to-php . I have multiple websites (domains) on this VPS, and most of them are WordPress + nginx + W3TC + Ubuntu 12.04 + MariaDB 10ish.

The Main Problem: Since the change, hhvm randomly and suddenly stops running. I don't really know why, so I decided to follow the last step of the tutorial: installing ps-watcher, detecting whether the service is running, checking every 5 seconds, and restarting it if not.

The Configuration, hhvm.conf:

location ~ \.(hh|php)$ {
    proxy_intercept_errors on;
    #error_page 502=@fallback;
    try_files $uri =404;
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    fastcgi_keep_conn on;
    fastcgi_buffer_size 128k;
    fastcgi_buffers 256 16k;
    fastcgi_busy_buffers_size …
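For reference, the tutorial's watchdog step amounts to a ps-watcher rule along these lines (a sketch, assuming the service is registered as hhvm; note a restart loop papers over the crash rather than explaining it):

[hhvm]
occurs = none
action = /usr/sbin/service hhvm restart

ps-watcher scans the process table on its configured interval and runs the action whenever no process matches the section's pattern.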

HP D2700 enclosure and SSDs. Will any SSD work?

I've got an HP D2700 enclosure that I'm looking to shove some 2.5" SSDs into. Looking at the prices of HP's SSDs vs something like an Intel 710, or even something less "enterprisey", there's quite a difference in price. I know the HP SSDs will obviously work, but I've heard rumours that buying an Intel/Crucial/whatever SATA SSD, bunging it in an HP 2.5" caddy and putting it in a D2700 won't work. Is there an enclosure/disk compatibility issue I should watch out for here? On the one hand, they're all just SATA devices, so the enclosure should treat them all the same. On the other, I'm not particularly well-versed in the various SSD flavours, so I don't know whether there's a good technical reason why one type of drive would work yet another wouldn't. I can also imagine that HP are annoying enough to do firmware checks on the disks and have the controller reject those it doesn't like. For background, the …

ubuntu - Unable to limit Apache server-status page to localhost

I am using Apache 2.4.18 on Ubuntu. I want to allow reading the server status only from localhost. In /etc/apache2/mods-enabled/status.conf I have:

<Location /server-status>
    SetHandler server-status
    Require ip 127.0.0.1
</Location>

I have read https://httpd.apache.org/docs/2.4/howto/access.html and I believe the above configuration should work. I have restarted Apache to ensure that the new configuration is active. However, the status page is still open for reading from anywhere. In /etc/apache2/sites-enabled/mysite.conf I have:

DocumentRoot /var/www
<Directory />
    Require all granted
</Directory>

What is wrong with my configuration?

Answer: From what I can see, the virtual host config file takes precedence over the mod_status config file. You grant access to everything under / within mysite.conf:

Require all granted

This results in everyone being able to access /server-status. You would have to manage permissions for /server-status in the virtual host config file itself rather than in /etc/apache2/mods-enabl…
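A minimal sketch of that suggestion, placed inside the <VirtualHost> block in mysite.conf (Require local is Apache 2.4 shorthand covering 127.0.0.1 and ::1; the more specific <Location> then wins over the directory-wide grant):

<Location /server-status>
    SetHandler server-status
    Require local
</Location>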

Any performance gain from multiple nvme drives in RAID versus a single nvme drive?

We've got a cluster of ScyllaDB hosts (it's a Cassandra-type database) running on i3 instances in Amazon, with the /var/lib/scylla/ folder mounted on a single NVMe drive. I'm wondering whether there is any I/O performance gain to be expected by replacing this single drive with two (or more) NVMe drives configured as RAID 0. In other words, would striping give us a noticeable performance boost on this type of drive?
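If you want to measure rather than guess, a sketch of the striped setup (device names are illustrative; check lsblk on the instance, and remember that i3 instance-store volumes are ephemeral):

mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
mkfs.xfs /dev/md0                 # XFS is the filesystem Scylla recommends
mount /dev/md0 /var/lib/scylla

Benchmarking the array against the single drive with your actual workload answers the question more reliably than spec sheets.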

Ubuntu Server Web exposed folder recommended permissions

I am setting up my first server from scratch, and as I am a complete novice at this, I am not sure if I am doing it right. I have my web-exposed folder at /var/www, and as I need it to be accessible to everyone and writable by owner and group, I set its permissions to 774. Is this how things are supposed to be set up, or am I doing something terribly wrong?

Answer: The general rule is to give users the least possible permissions/privileges. Folder permissions should generally be either 755 (no write for group) or 775 (write for group). In your case, 774 is likely not enough for the web server process to access the folder, given that it is not the owner and not a member of the group: the final 4 gives "others" read but not execute, and a process needs execute permission on a directory to traverse into it.
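A sketch of the usual arrangement, assuming Debian/Ubuntu where the web server runs as www-data (substitute your own account for deploy-user):

chown -R deploy-user:www-data /var/www
find /var/www -type d -exec chmod 775 {} \;   # directories: rwxrwxr-x
find /var/www -type f -exec chmod 664 {} \;   # files:       rw-rw-r--

With the web server in the group, 755/644 is tighter still if the server only ever needs to read.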

linux - Why does the CentOS 5.8 install give an error about CD-ROM drive when using HP ILO?

I'm trying to install CentOS 5.8 on a system with the following specs:

HP DL360e Gen8
HP Dynamic Smart Array B320i RAID Controller
iLO 4 v1.05

I'm connecting the ISO to the server as a virtual disk using HP iLO 4. It initially seems to boot fine. I see lines like:

Loading vmlinuz........
Loading initrd.img...................

followed by various BIOS messages. Eventually the anaconda installer starts and I get the following:

Loading SCSI driver
Loading usb-storage driver...

followed by:

Loading SCSI driver
Loading ahci driver...

Then finally I get:

CD Not Found
CentOS CD was not found in any of your CDROM drives. Please insert the CentOS CD and press OK to retry.

If I select OK, it gives me the same error. From what I can dig up, iLO essentially mounts the CD as a USB CD drive on the system. I'm wondering if for some reason a driver for facilitating this isn't available (although I'm still confused about how I could get this fa…

linux - php5-fpm doesn't create sockets for pools

I'm going off section 3.1 of http://www.howtoforge.com/php-fpm-nginx-security-in-shared-hosting-environments-debian-ubuntu . I have created a directory in /var/run on which www-data:www-data has rwxrwx permissions. My new pool config file consists of: http://pastebin.com/nmrJkMkz However, upon restarting php5-fpm, /var/run/php5-fpm/domain.com.sock is not created as specified in the config file. Nothing appears in the php5-fpm error log. Any ideas as to why this may be occurring?
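The pastebin content isn't shown here, so as a baseline, a pool definition along these lines is known to produce the socket (names are illustrative; a pool that silently fails to appear is often not being loaded at all, so also confirm the file sits under the directory that php-fpm.conf's include= glob points at and that it ends in .conf):

[domain.com]
user = www-data
group = www-data
listen = /var/run/php5-fpm/domain.com.sock
listen.owner = www-data
listen.group = www-data
listen.mode = 0660
pm = dynamic
pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3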

What is the difference between a hostname and a fully qualified domain name?

I am new to the world of setting up servers and am baffled by the terms hostname and fully qualified domain name (FQDN). For example, if I want to set up a server that hosts files on the local network, i.e. a file server, would I use a hostname such as myfileserver, or something else? What if I wanted to set up a web server, mail server, etc. that external users could access?

Answer: Your hostname is the name of your computer. Your fully qualified domain name is your hostname plus the domain your company uses, often ending in .local. So if the name of your computer is bob, and your company's domain is contoso.local, your computer's fully qualified domain name (FQDN) is bob.contoso.local:

Hostname: bob
Domain:   contoso.local
FQDN:     bob.contoso.local

In the case of a domain like contoso.local, I did not use an "external" internet domain name. This name doesn't have to be the only way that you address the server. If you make it available by its IP ad…
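On a Linux box the two names can be inspected directly (example output assumes the bob/contoso.local names above):

hostname       # -> bob                 (short hostname)
hostname -f    # -> bob.contoso.local   (FQDN)
hostname -d    # -> contoso.local       (domain part)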

linux - Redhat doesn't set my desired hostname on reboot

I have a Red Hat (EL5) server whose hostname I need to change. I'm trying to put it back into a known state to help with server provisioning activities. As part of changing the hostname, I'm updating /etc/sysconfig/network and /etc/hosts; I also have an explicit call to hostname. My desired state is that the server thinks its hostname is "localhost", and a call to hostname returns "localhost". The problem is that when I reboot, the hostname reverts to "localhost.companyname.com", which is not what I want. How do I ensure that the hostname is set to just "localhost" after a reboot?

My /etc/sysconfig/network file contains:

NETWORKING=yes
HOSTNAME=localhost
GATEWAY=123.123.123.123   # I do have a proper IP address here

My /etc/hosts file contains:

127.0.0.1   localhost.localdomain localhost
172.21.1.1  localhost.companyname.com localhost
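One likely culprit, worth verifying on the box: on EL5, /etc/rc.d/rc.sysinit treats a hostname of "localhost" or "localhost.localdomain" as unset and replaces it with the reverse lookup of the primary IP, and the /etc/hosts line above maps 172.21.1.1 back to localhost.companyname.com. A sketch of the hosts file that follows from that reading (assuming nothing else depends on the companyname entry):

127.0.0.1   localhost.localdomain localhost
172.21.1.1  localhost

Keeping HOSTNAME=localhost while removing the FQDN from the address's hosts entry stops the boot-time lookup from returning the unwanted name.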

linux - Meaning of the buffers/cache line in the output of free

Why does my server show:

             total       used       free     shared    buffers     cached
Mem:      12286456   11715372     571084          0      81912    6545228
-/+ buffers/cache:    5088232    7198224
Swap:     24571408      54528   24516880

I have no idea how to calculate memory usage in Linux. I think it says that 5088232 is used whereas 7198224 is free, meaning it is actually consuming only 5 GB of RAM?

Answer: Meaning of the values. The first line means:

total: your total (physical) RAM (excluding a small bit that the kernel permanently reserves for itself at startup); that's why it shows ca. 11.7 GiB, and not the 12 GiB you probably have.
used: memory in use by the OS.
free: memory not in use.

total = used + free

shared / buffers / cached: memory usage for specific purposes; these values are included in the value for used.

The second line gives the first-line values adjusted: it gives the original value of used minus the sum buffers+cached, and the original value …
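Working that adjustment through with the numbers above:

buffers + cached           =    81912 + 6545228 = 6627140
used - (buffers + cached)  = 11715372 - 6627140 = 5088232   # "really" used
free + (buffers + cached)  =   571084 + 6627140 = 7198224   # "really" free

So yes: applications occupy about 5 GB; the rest of "used" is cache that the kernel hands back on demand.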

raid - HP Smart Array P410 Stuck in Ready for Recovery 00.0%

We are using the HP Smart Array P410 disk controller on our Supermicro server. Sadly, one of the HDDs in a RAID 10 array was damaged and we were forced to replace that specific hard disk. After 3 days and rebooting the server twice, we are still seeing the very first warning message from after changing the HDD, which says:

Warning Status Messages ((Ready for Recovery) Logical Drive 1 (931.5 GB, RAID 1+0)) 776
(Ready for Recovery) Logical Drive 1 (931.5 GB, RAID 1+0) is queued for rebuilding.

We are worried about the issue, so we decided to check for a firmware update; the firmware is up to date and there is no update available. Note that we have also replaced the RAID card with a new one of the same model. Our RAID device information:

Firmware Version: 6.40
Number of Ports: 2 (Internal only)
Number of Arrays: 3
Smart Array P410 in Slot 1
Bus Interface: PCI
Slot: 1
Serial Number: PACCR9SXRCQH
Cache Serial Number: PAAVPID12031NLH
RAID 6 (ADG) Status: Disabled
Cont…
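For checking from the OS rather than boot-time messages, HP's CLI (hpacucli, later renamed ssacli) reports rebuild state and per-disk status; slot=1 is taken from the controller information above:

hpacucli ctrl slot=1 logicaldrive all show status
hpacucli ctrl slot=1 physicaldrive all show

A rebuild that stays queued at 0.0% can mean the replacement disk was never accepted, which these commands would make visible.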

VMWare ESXi free?

Much is made of the fact that VMware's ESXi hypervisor is "free". As best I can tell, you can install the hypervisor on a host for "free". Because ESXi does not have a built-in management console, you need a program of some sort to connect to the ESXi hosts to "manage" them. By "manage" I mean start, stop, install, reboot and back up VMs. If you install the free ESXi on a host and connect to it via a web browser, you are prompted to download vSphere to manage the host. OK, but vSphere is, as best I can tell, not free: when you install it you are continuously reminded that you have only 60 days to evaluate vSphere. My question is this: is there a completely free management tool for ESXi hosts that enables one to:

Create VMs
Modify VM settings (memory etc.)
Power VMs on and off
Back up a VM (via any means)
Restore a VM from a backup

Failing that, without licensing something from VMware, is there any tool that will let you manage your h…

Etiquette of Troubleshooting Problems In The Workspaces Of Others

A visibly upset colleague approached our technical support team this morning. She noted a member of our team had changed her workspace:

Her monitor was turned off (she expected standby mode).
Her chair settings were changed.
She had been logged out, with one of our team members' names in the Windows log-in box.

The first issue seems to have led to confusion and frustration as she wondered why she did not see her PC resuming from standby mode. The second issue seems to have been a trigger for a need for respect and comfort; apparently it takes her some time to find just the right settings to feel comfortable. The third issue seemed to stem from her desire to wrap up work prior to a three-month leave starting in 1-2 days: it can take 1-2 hours for the corporate virus scanner on her older PC to complete its weekly scan, which seems to be triggered on log-in, and this reduces her productivity. After she felt heard about why our team might have needed to do these things, she returned to a pleasant sta…

email - How is a sender verified with Gmail's 'Send Mail As' feature?

I have an email account info@example.com with mail.live.com. I also have a Gmail account. I have set up the 'Send mail as' feature in Gmail to send mail as info@example.com, and this works correctly. My question is: if I send an email from Gmail 'as' info@example.com, how does the recipient's server verify that Gmail was authorized to send mail for example.com? I have some knowledge of SPF records, and I know that the SPF record for example.com says that only messages originating from hotmail.com servers are valid. The message that Gmail sends out has the @gmail.com address in the Return-Path and Sender fields, so the SPF check is done against gmail.com and not example.com. I have tested this with the test service at verifier.port25.com and it passes:

SPF check:          pass
DomainKeys check:   neutral
DKIM check:         pass
Sender-ID check:    pass
SpamAssassin check: ham
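The question's own observation is the mechanism: SPF is evaluated against the envelope sender (Return-Path), not the human-visible From header, which is why Gmail's servers pass. The record a receiving server consults can be inspected directly; the answer shown is purely illustrative, not example.com's real record:

dig +short TXT example.com
"v=spf1 include:_spf.google.com ~all"   # hypothetical record authorizing Google's servers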

storage - Linux - real-world hardware RAID controller tuning (scsi and cciss)

Most of the Linux systems I manage feature hardware RAID controllers (mostly HP Smart Array). They're all running RHEL or CentOS. I'm looking for real-world tunables to help optimize performance for setups that incorporate hardware RAID controllers with SAS disks (Smart Array, Perc, LSI, etc.) and battery-backed or flash-backed cache. Assume RAID 1+0 and multiple spindles (4+ disks). I spend a considerable amount of time tuning Linux network settings for low-latency and financial trading applications, but many of those options are well-documented (changing send/receive buffers, modifying TCP window settings, etc.). What are engineers doing on the storage side? Historically, I've made changes to the I/O scheduling elevator, recently opting for the deadline and noop schedulers to improve performance within my applications. As RHEL versions have progressed, I've also noticed that the compiled-in defaults for SCSI and CCISS block devices have changed as well. This has h…
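For concreteness, a sketch of the kind of per-device knobs in question (values are starting points to benchmark, not recommendations, and sda is illustrative):

echo noop > /sys/block/sda/queue/scheduler      # let the BBWC/FBWC controller order the I/O
echo 1024 > /sys/block/sda/queue/nr_requests    # deeper block-layer queue
blockdev --setra 4096 /dev/sda                  # read-ahead, in 512-byte sectors

Changes made this way do not persist across reboots; they would normally live in a udev rule or rc.local.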

malware - Rootkit Revealer is failing to run, why?

On a user's laptop (Windows 7 x64), terrible performance led me to suspect a rootkit after ruling almost everything else out. I checked boot entries with Autoruns and ran a full scan with Malwarebytes, and both came up more or less clean. I downloaded RootkitRevealer, unzipped it and ran it as admin, but it would not open. I opened the Task Manager to check and tried reopening the program. Sometimes the process wouldn't even show; sometimes it would show for ~10 s with a fixed amount of memory listed, and then die. Once, I got to the Sysinternals licence agreement, but it died after that. Tried renaming the EXE: no dice. Tried safe mode: no dice. One thing I haven't done is check the event logs, which I should probably do. Besides that, what mechanism could potentially cause RootkitRevealer to fail to start? Or is my system likely compromised, requiring a nuke from orbit?

Answer: RootkitRevealer does not support and does not run on 64-bit operating systems. The fact that RootkitReveal…

How is the kernel OOM score calculated?

I looked on The Google and couldn't find anything that explains how the score in /proc/[pid]/oom_score is calculated. Why use this score instead of just using the total memory used?

Answer: See Goldwyn Rodrigues's 2009 article for the implementation at that time, Jonathan Corbet's 2010 article for what I believe is the current behavior, and Jonathan Corbet's 2013 article for ideas about future changes. From the 2010 article:

In David's patch set, the old badness() heuristics are almost entirely gone. Instead, the calculation turns into a simple question of what percentage of the available memory is being used by the process. If the system as a whole is short of memory, then "available memory" is the sum of all RAM and swap space available to the system. If, instead, the OOM situation is caused by exhausting the memory allowed to a given cpuset/control group, then "available memory" is the total amount allocated to that …
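The score and its knob can be read and set directly (PID 1234 is hypothetical; -1000 exempts a process entirely, +1000 makes it the preferred victim):

cat /proc/$$/oom_score                  # badness score of the current shell
cat /proc/$$/oom_score_adj              # adjustment, range -1000 .. 1000
echo -500 > /proc/1234/oom_score_adj    # make PID 1234 less likely to be killed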

Does a client cache the IP of a CNAME DNS request or the other domain?

I am contemplating using two different DNS hosts and am curious how a client system would cache the CNAME record. The reason I am going this route is that the primary domain I would like to use is already hosted with one DNS service, but the application is hosted on another service that has its own integrated DNS handling. Specifically, what I am looking to understand is: if friendly.abc.com is a CNAME record that points to long-ugly-url.hosted-service.com, which is itself hosted (DNS and application) on another service with an IP of 1.2.3.4, will clients accessing friendly.abc.com be caching long-ugly-url.hosted-service.com or 1.2.3.4? The reason I ask is that if it is the former, then the long-ugly-url.hosted-service.com A record can have a short TTL and be changed quickly, while friendly.abc.com can be set with a higher TTL but still have changes "propagated" quickly. If it is the latter, then both would need to have short …
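Here is the resolution as a resolver sees it, with illustrative TTLs matching the scenario. Each record is cached under its own TTL, so the CNAME and the A record age independently, which means the former interpretation in the question is the right one:

dig friendly.abc.com

;; ANSWER SECTION:
friendly.abc.com.                  3600  IN  CNAME  long-ugly-url.hosted-service.com.
long-ugly-url.hosted-service.com.    60  IN  A      1.2.3.4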

apache 2.2 - 404 not found error for virtual host

In my /etc/apache2/sites-enabled I have a file site2.com.conf, which defines a virtual host as follows:

ServerAdmin hostmaster@wharfage
ServerName site2.com
ServerAlias www.site2.com site2.com
DirectoryIndex index.html index.htm index.php
DocumentRoot /var/www
LogLevel debug
ErrorLog /var/log/apache2/site2_error.log
CustomLog /var/log/apache2/site2_access.log combined
ServerSignature Off
Options -Indexes
Alias /favicon.ico /srv/site2/static/favicon.ico
Alias /static /srv/site2/static
# Alias /media /usr/local/lib/python2.5/site-packages/django/contrib/admin/media
Alias /admin/media /var/lib/python-support/python2.5/django/contrib/admin/media
WSGIScriptAlias / /srv/site2/wsgi/django.wsgi
WSGIDaemonProcess site2 user=samj group=samj processes=1 threads=10
WSGIProcessGroup site2

I do the following to enable the site:

1) In /etc/apache2/sites-enabled, I run the command a2ensite site2.com.conf
2) I then get a message that the site was successfully enabled, and …

ssh - Keeping a linux process running after I logout

I'm connecting to a Linux machine through SSH, and I'm trying to run a heavy bash script that performs filesystem operations. It's expected to keep running for hours, but I cannot leave the SSH session open because of internet connection issues I have. I doubt that running the script with the background operator, the ampersand (&), will do the trick, because I tried it and later found that the process had not completed. How can I log out and keep the process running?

Answer: The best method is to start the process in a terminal multiplexer. Alternatively, you can make the process not receive the HUP signal. A terminal multiplexer provides "virtual" terminals which run independently of the "real" terminal (actually all terminals today are "virtual", but that is another topic for another day). The virtual terminal will keep running even if your real terminal is closed with your SSH session. All processes started from the virtual termin…
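Both routes in command form (heavy-script.sh is a stand-in for the script in question):

# Option 1: terminal multiplexer - detach with Ctrl-a d (screen) or Ctrl-b d (tmux),
# reattach later with `screen -r longjob` or `tmux attach -t longjob`
screen -S longjob ./heavy-script.sh

# Option 2: shield the job from SIGHUP when the session drops
nohup ./heavy-script.sh > job.log 2>&1 &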

domain name system - Registering a co.za using private nameservers

I'm having some difficulty registering a co.za domain using my own nameservers. I'm new to this, so please excuse any newbie mistakes and questions. I'm using BIND 9.7.1-P2 and have followed all the tutorials I can find, but when I try to register the co.za domain I get the following:

Provided Nameserver information
Primary Server : ns1.maximadns.co.za @ 41.185.17.58
Secondary 1    : ns2.maximadns.co.za @ 41.185.17.59
Domain "maximadns.co.za", SOA Ref (), Orig ""
Pre-existing Nameservers for "maximadns.co.za":-
Syntax/Cross-Checking provided info for Nameserver at 6a: ns1.maximadns.co.za @ 41.185.17.58
IPv4: 41.185.17.58 ==> [WARN: No PTR records!]
FQDN: ns1.maximadns.co.za ==> [WARN: No A records!]
Syntax/Cross-Checking provided info for Nameserver at 6e: ns2.maximadns.co.za @ 41.185.17.59
IPv4: 41.185.17.59 ==> [WARN: No PTR records!]
FQDN: ns2.maximadns.co.za ==> [WARN: No A records!]

The message "No PTR records?" indicat…
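The registry's two warnings correspond to two lookups that can be checked from any machine (expected values taken from the submission above):

dig +short A ns1.maximadns.co.za    # should print 41.185.17.58
dig +short -x 41.185.17.58          # should print ns1.maximadns.co.za.

The A records must be publicly resolvable (with glue at the parent if the nameservers live inside the zone they serve); the PTR records can only be added by whoever controls the 41.185.17.x reverse zone, typically the IP provider.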

reverse proxy - Forward ssh connections to docker container by hostname

I have gotten into a very specific situation, and although there are other ways to do this, I've gotten somewhat obsessed with it and would like to find a way to do it exactly like this:

Background: Say I have a server running several services tucked away in isolated Docker containers. Since most of those services are HTTP, I'm using an nginx proxy to expose specific subdomains for each service. For example, a node server is running in a Docker container with its port 80 bound to 127.0.0.1:8000 on the host. I create a vhost in nginx that proxies all requests for myapp.mydomain.com to http://127.0.0.1:8000. That way, the Docker container cannot be accessed from outside except through myapp.mydomain.com. Now I want to start a gogs Docker container in such a way that gogs.mydomain.com points to the gogs container. So I start this gogs container with port 8000 bound to 127.0.0.1:8001 on the host, and an nginx site proxying requests for gogs.mydomain.com to http://127.0.0.1:8001 …

linux - Web Server Running Low in Memory

I have an EC2 small instance with 2 GB of memory, running Fedora Linux. Typically I get about 275 page views per day on average, and I have a monitoring agent on the box. Some of the stats are a little worrying in terms of free memory: last week, at its lowest point, we had only 30 MB of memory free; this morning it appears to have increased to about 150 MB. Hyperic is our monitoring agent, which runs on Java; top shows that its memory usage is only about 3.4%. When I add up all of the httpd processes I get about 15-20% memory usage, with MySQL using about 1%. Top doesn't reveal where the rest of the memory is going. What could I do to find out what's causing the high memory consumption? Could it be the 275 hits a day? About 95% of our code is PHP and HTML. MySQL is being used lightly by an application called OpenVBX, which is only used internally. If it's Apache, an upgrade in memory should solve our issue, right? Any advice would be a huge help, thanks! UPDATE: free sho…

Cisco static NAT not working on LAN side

I have a web server on my private network with the IP address 192.168.1.134. I need to allow users to access this web server from both the internet and the private network. The public IP address is 85.185.236.12. I set up static NAT (192.168.1.134 => 85.185.236.12) on the WAN interface. Now, when we access it from the internet everything works perfectly, but when we try to access it from the LAN we can't reach the web server. I use a Cisco 1841 router, and I think the NAT translation is not applied when I try to access it from inside. How can we access the web server from the LAN? Thanks.

Windows XP network service doesn't start

On a Windows XP PC joined to a domain, I have a service (SCardSvr) that used to run as "NT AUTHORITY\NetworkService". I accidentally changed the logon to the Local System account, and the service didn't work properly. So I'd like to set the logon back to "NT AUTHORITY\NetworkService", and I did, leaving the password blank. Too bad that when I start the service (I'm a local admin) it doesn't run, and gives me this error:

error 5: access denied

I also set the service to start automatically and restarted the PC, but nothing changes. Any ideas? I need the service to run under the "NT AUTHORITY\NetworkService" credentials.

Custom nameserver domain with CNAME

We have quite a few domains where the rightful owner is us, but the nameservers are managed by a third party (a subcontractor) who doesn't allow us to change the zone files. Thus, we are moving to new nameservers which we can manage on our own, namely DigitalOcean's free DNS service. Is it possible, and if yes, what disadvantages would it bring, if instead of requesting a nameserver change at the registrar to DO's nameservers, I requested a change to ns1.example.com and then created a CNAME record so that ns1.example.com points to ns1.digitalocean.com? Would that work? In that case, if we ever have to move our DNS service from DigitalOcean to some other provider, the registrar wouldn't need to change a hundred domains at once with all the administrative hassle; we could simply modify the above-mentioned CNAME record.

networking - can telnet to a service, but not access service ports directly

We're running a variety of services with our cloud provider. Everything normally works fine, but occasionally we end up with issues connecting to one host (which has our repos on it). We haven't been able to find a solution to the connectivity problem, so we completely rebuilt the host at a different cloud provider. Things had been running fine, but the same connectivity issue is starting again. I'll try to summarize clearly: the host that is having connectivity issues is running GitLab; we also SSH into that host a fair amount. When we run into connectivity issues, we cannot access ssh, git, https, etc. Pinging the host works fine. I can telnet to port 22 and get a response:

Connected to xyz.
Escape character is '^]'.
SSH-2.0-OpenSSH_7.6p1 Ubuntu-4ubuntu0.1

I can access any port on the host via telnet and get back a response immediately. If I try to connect to the same host via ssh, I get:

ssh -v -v me@xyz
OpenSSH_7.9p1, LibreSSL 2.7.3
debug1: Reading confi…

redhat - What are these zero-length files created by Apache in the tmp directory?

Any ideas on why Apache (httpd) creates these files in /tmp? I'm on Red Hat 5.5 with Apache 2.2, mpm-prefork:

-rw-------. 1 apache apache 0 Aug 14 12:46 filec1puD5
-rw-------. 1 apache apache 0 Aug 14 12:46 fileKJqaih
-rw-------. 1 apache apache 0 Aug 14 12:46 fileB7j9Ws
-rw-------. 1 apache apache 0 Aug 14 12:46 file1o7MCE
-rw-------. 1 apache apache 0 Aug 14 12:46 filefqAvjQ
-rw-------. 1 apache apache 0 Aug 14 12:46 filexjpv01

Sometimes I see dozens of these. I always delete them, but I haven't found anything on why or how these files are generated in the first place. The error logs look clean, albeit they're set to Error level. Update: the application is Drupal 7, running on PHP 5.3.2.

Answer: /tmp is PHP's default folder for session data. You can change this by editing session_save_path in your php.ini file. That being said, various scripts could write various session data here. There are cases wher…
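If the goal is simply to keep these out of /tmp, the relevant php.ini settings look like this (the directories are illustrative and must exist and be writable by the apache user; zero-length fileXXXXXX names can also come from file uploads or tmpfile() calls rather than sessions):

; php.ini
session.save_path = "/var/lib/php/session"   ; session data
upload_tmp_dir    = "/var/lib/php/uploads"   ; file-upload scratch space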

linux - Magento's cron.php: Persistent or not? Why putting it into cron?

I have a question regarding Magento. Apparently, to perform scheduled tasks, Magento needs to run a script called cron.php. Originally the script was triggered by an on-server crontab using the line wget -O /dev/null http://www.example.com/cron12345.php . Unfortunately, due to some problems, we needed to limit the lifetime of the PHP-FPM child processes to 300 seconds... and that murdered the PHP process running cron.php. I tried running cron.php from the CLI using the command php -c /etc/php5/fpm/php.ini cron.php and it seemed to work... but there was no output and the script just kept running... So my questions: Is cron.php a one-shot script, or does it run until completion and need to be invoked again? If it needs to be regularly invoked (via cron), can I just add a crontab entry like the following?

*/15 * * * * cd /var/www/website && php -c /etc/php5/fpm/php.ini cron.php

Thank you for your kind assistance.

Answer: It should finish; it can take some time, esp…

apache 2.2 - MediaWiki Apache2 RewriteEngine Not Working

I am working from the Manual:Short URL/Apache guide to set up a wiki with a shortened URL on a Debian server running Apache2. I want the directory /var/www/currienet/w/ to be accessible as currienet/wiki/ (a local network address). I have the following in the httpd.conf file:

ServerName http://currienet
# !!! Be sure to point DocumentRoot to 'public'!
DocumentRoot /var/www/currienet/root/
Allow from all
# Alias /wiki '/var/www/currienet/w'
LogLevel debug
# Enable the rewrite engine
RewriteEngine On
# Short url for wiki pages
RewriteRule ^/?wiki(/.*)?$ /var/www/currienet/w/index.php [L]

And the following settings in LocalSettings.php:

$wgScriptPath = "/w";
$wgArticlePath = "/wiki/$1";

When I try to access currienet/wiki I get the main page displaying, but none of the images, stylesheets etc. are loaded, and I get the following in the Apache error log (IPs bloc…
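One thing worth checking against the guide: with DocumentRoot set to /var/www/currienet/root/ and the Alias /wiki line commented out, nothing maps /w/... URLs to /var/www/currienet/w, yet $wgScriptPath = "/w" makes the skin load its stylesheets from exactly there. A sketch of the alias pair the manual's approach implies (paths taken from the question):

# Serve the MediaWiki installation itself (load.php, skins, images) at /w
Alias /w /var/www/currienet/w
# Hand pretty /wiki/Article URLs to index.php
Alias /wiki /var/www/currienet/w/index.php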

vmware esxi - Openfiler iSCSI performance

Hoping someone can point me in the right direction with some iSCSI performance issues I'm having. I'm running Openfiler 2.99 on an older ProLiant DL360 G5: dual Xeon processors, 6 GB ECC RAM, an Intel Gigabit Server NIC, and a SAS controller with 3 x 10K SAS drives in RAID 5. When I run a simple write test on the box directly, the performance is very good:

[root@localhost ~]# dd if=/dev/zero of=tmpfile bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 4.64468 s, 226 MB/s

So I created a LUN, attached it to another box running ESXi 5.1 (Core i7 2600K, 16 GB RAM, Intel Gigabit Server NIC) and created a new datastore. Once I created the datastore I was able to create and start a VM running CentOS with 2 GB of RAM and 16 GB of disk space. The OS installed fine and I'm able to use it, but when I ran the same test inside the VM I got dramatically different results:

[root@localhost ~]# dd if=/dev/zero of=tmpfile bs=1M count=1000
1000+0 records in …
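A caveat on the baseline before chasing the iSCSI path: without a direct-I/O or sync flag, dd writes into the page cache, so 226 MB/s from three 10K spindles in RAID 5 is more likely measuring RAM than disk. Repeating both tests this way makes them comparable:

# oflag=direct bypasses the page cache so the disks are actually measured
dd if=/dev/zero of=tmpfile bs=1M count=1000 oflag=direct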

php - WordPress clogs up Apache

OK, I hate to be the helpless Apache noob, but I'm feeling stumped here. All of a sudden last night, our WordPress site went down. I rebooted it and watched for a couple of minutes, and it seemed all right, so I left it alone. Then I woke up to find it down again. After a little investigation, I discovered that, despite only getting 20 or so requests per minute at the time, Apache keeps forking a new instance for just about every request until it hits MaxClients, and then the instances just sit there doing nothing: literally 0.1% CPU utilization for the whole system at that point. If I log into MySQL and look at the process list, I can see a corresponding database connection for each httpd, so it looks like the scripts are never ending. But if I request a static file, or even a simple "Hello world" PHP file, before it reaches MaxClients, that request goes through fine. I'm really at a loss as to even what to look at, because nobody else here has the technic…

networking - Secondary IPs on DMZ machines not working

I am moving some virtual machines from my DMZ in a BladeCenter over to my new Cisco UCS. We are using Hyper-V, running Server 2012 on the old BladeCenter and Server 2012 R2 on the new UCS. We have a Cisco ASA 5515 firewall, ASA version 9.1(4), that is the gateway for both LAN and DMZ traffic. In the old BladeCenter we have a virtual machine in the DMZ with one network interface configured. On this interface it has a primary IP, let's say 172.10.1.10, and some additional IPs configured, 172.10.1.11 and 172.10.1.12. All of these work and route normally, no issues. We migrated a machine from the BladeCenter to the UCS, and now have trouble with its secondary IPs in the DMZ. This machine on the UCS is in the DMZ and has one network interface configured with a primary IP, let's say 172.10.1.15, and secondary IPs 172.10.1.16 and 172.10.1.17. I can ping the primary IP (.15) from anywhere on the network; however, I cannot ping (or connect in any way …

SAS vs Near-line SAS vs SATA

I'm unsure about the differences between these storage interfaces. My Dell servers all have SAS RAID controllers in them, and they seem to be cross-compatible to an extent. The Ultra-320 SCSI RAID controllers in my old servers were simple enough: one type of interface (SCA) with special drives and special controllers, humming along at 10-15K RPM. But these SAS/SATA drives seem like the drives I have in my desktop, only more expensive. Also, my old SCSI controllers had their own battery backup and DDR buffer, neither of which is present on the SAS controllers. What's up with that? "Enterprise" SATA drives are compatible with my SAS RAID controller, but I'd like to know what advantage SAS drives have over SATA drives, as they seem to have similar specs (but one is a lot cheaper). Also, how do SSDs fit into this? I remember when RAID controllers required HDDs to spin at the same rate (as if the controller card supplanted the controller in the drive), so how does that w…

linux - Freeing up memory (RAM) on Ubuntu 8.04 Server

I run Ubuntu 8.04 on a Slicehost virtual server with some lightweight server apps: Apache 2.2, svnserve, MySQL, and ProFTPD. The only serious limitation of the service is RAM: 256 MB is what I'm paying for. I noticed that if I let the system run for a few days/weeks, the amount of free RAM slowly declines, and the paging file starts being used soon after. For example, upon rebooting I may have 60% of RAM free; the next day it may be at 55%, etc.

             total  used  free  shared  buffers  cached
Mem:           256   114   141       0        3       50
-/+ buffers/cache:     61   194
Swap:          511     0   511

How would I prevent the amount of available memory from declining?

Edit: Here's my ps aux listing of the top memory consumers. I left out all the system processes. I can see that Apache and MySQL top the memory usage.

USER  PID   %CPU  %MEM  VSZ    RSS  TTY  STAT  START  TIME  COMMAND
root  1369  0.0   0.3   16844 …
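Two commands cover most of this investigation (the -/+ buffers/cache row above already shows 194 MB effectively free, so the slow decline is largely the kernel's page cache doing its job):

free -m                     # watch the "-/+ buffers/cache" row, not the "Mem:" row
ps aux --sort=-rss | head   # processes ranked by resident memory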