Posts

Showing posts from September, 2015

What is the current state (2016) of SSDs in RAID?

There are plenty of resources available online that discuss using SSD drives in RAID configurations - however these mostly date back a few years, and the SSD ecosystem is very fast-moving - right as we're expecting Intel's "Optane" product release later this year which will change everything... again. I'll preface my question by affirming there is a qualitative difference between consumer-grade SSDs (e.g. Intel 535) and datacenter-grade SSDs (e.g. Intel DC S3700). My primary concern relates to TRIM support in RAID scenarios. To my understanding, despite it being over 6 years since SSDs were introduced in consumer-grade computers and 4 years since NVMe was commercially available - modern-day RAID controllers still do not support issuing TRIM commands to attached SSDs - with the exception of Intel's RAID controllers in RAID-0 mode. I'm surprised that TRIM support is not present in RAID-1 mode, given the way drives mirror each other, it seems straightforwa

windows - Server 2008 DNS/Hostname Lookups Giving Out Wrong NIC

So I have a Server 2008 machine which also acts as a PDC, and provides DNS to all other machines. Now, the server has 2 NICs, one on a 172.16.0.0/24 LAN, and one on a 192.168.47.0/24 LAN. Every other machine on the network has one NIC, and belongs to one network or the other. DHCP is working fine, but hostname/DNS lookups using the server name of the PDC sometimes resolve to the wrong NIC. Now, DNS does have hostname entries for the PDC for both addresses, but I can't have a computer not on that network getting a resolved address it can't reach. Ex: Computer pdc1 has addresses 192.168.47.1 and 172.16.0.1. Client client1 has a NIC physically on the 192.168.47 network. For that reason, DHCP works fine, it gets an address. However, when it looks up the address for pdc1, it gets 172.16.0.1, which isn't reachable and causes lots of problems. My question is, what is the standard way to prevent this situation? I know I'm not the only one with a PDC on separate NICs, where the client comp

Improving email deliverability: Implementing DKIM and DMARC

I have a messaging system on my app where users can send messages directly to other users straight from my domain (not going through Mailchimp's Mandrill templates or Google Apps). I also have cron jobs that send users' statistics to about 5,300 users every week. Again, the script sends messages straight from my domain. Most e-mails are going to users' spam box, which I need to fix as soon as possible. I recently found an app that tests e-mail deliverability and gives scores based on how well configured your email server is (among other things). This is the URL: https://www.mail-tester.com . I was able to fix several things and my score went up from -0.2/10 to 7.7/10. However, although the tester says my e-mail is "good stuff", I know hundreds of emails are either not being delivered (returned because the sender is not trusted) or going straight into the spam box. The last thing I need to fix to have an almost perfect score is to add a DKIM signature to the emails

linux - Clonezilla partitions restoring

I have to clone a linux drive to a smaller destination disk. By default Clonezilla will not let me do this. This is what things look like now: Source system: /dev/sda1 72G 10G 58G 15% / udev 7.9G 4.0K 7.9G 1% /dev tmpfs 3.2G 332K 3.2G 1% /run none 5.0M 0 5.0M 0% /run/lock none 7.9G 0 7.9G 0% /run/shm Destination system: /dev/sda3 912G 49G 817G 6% / udev 7.9G 4.0K 7.9G 1% /dev tmpfs 3.2G 332K 3.2G 1% /run none 5.0M 0 5.0M 0% /run/lock none 7.9G 0 7.9G 0% /run/shm /dev/sda2 90M 24M 61M 29% /boot /dev/sda6 1.9G 35M 1.8G 2% /tmp The partition has a linux installation on it. The source drive is 3TB, the destination drive is 1TB. Therefore what I am going to do is to clone the partition sda1 from the source drive,

vps - How can I stop a currently active DDoS attack?

My VPS is under a DDoS attack. I cannot access RDP, and I cannot take it offline, or access it in any way at all. What can I do? They are not trying to brute-force, just trying to stop access to the VPS. I don't know if maybe the datacenter messed up or something, but the VPS is online and denying all requests, as is normal when under DDoS. Is there anything that I or my dedicated hosting provider can look at in the logs? How should I approach forensics after the fact? I don't know much about DDoS attacks; do they usually stop after a few days? Is there an existing anti-DDoS program or something? Answer There is no easy way to stop DDoS attacks. Get in touch with your provider and ask them for help. No program will help you against a DDoS which is intended to consume your bandwidth; you can only absorb these attacks by having more capacity and working with your upstream providers to dismantle the attack.

security - Weak points of ssh tunnel and x11 server, hack investigation

So, today I've been hacked. It's very puzzling to me how it was done, so I'm looking for experienced people to point out weak points in the design of my systems. I have two servers. One is a VPS with a connection to the internet ( server1 ), the second is a server inside a private network ( server2 ), connected to the first one via an ssh reverse tunnel, exposing ports 22 (ssh) and 5900 (x11vnc) to the internet. Both servers are Ubuntu 14.04 . I use these commands to create the ssh reverse tunnel (on server2): autossh -fR \*:4202:localhost:22 -N root@server1.com autossh -fR \*:5900:localhost:5900 -N root@server1.com A little more specifics about the SSH configuration on my servers. Allows root login Has this line: GatewayPorts clientspecified Full configuration Doesn't allow root login. Uses default (stock) ssh configuration I use a common (Batman-related :) ) username with a 9-character password. Full configuration As for x11 , I use this command to create the x11vnc server: /usr/bin/x11vnc -dontdisco

ext4 - How does SSD meta-data corruption on power-loss happen? And can I minimize it?

Note: This is a follow-up question to Is there a way to protect SSD from corruption due to power loss? . I got good info there but it basically centered on three areas: "get a UPS", "get better drives", or how to deal with Postgres reliability. But what I really want to know is whether there is anything I can do to protect the SSD against meta-data corruption, especially in old writes. To recap the problem: it's an ext4 filesystem on Kingston consumer-grade SSDs with write-cache enabled, and we're seeing these kinds of problems: files with the wrong permissions files that have become directories (for example, toggle.wav is now a directory with files in it) directories that have become files (not sure of content..) files with scrambled data The problem is less with these things happening on data that's being written while the drive goes down, or shortly before. It's a problem but it's expected and I can handle that in other ways. The bigger surp

amazon ec2 - One EC2 source with distributed varnish machines

I have a web site hosted in an EC2 instance (2008 r2 + iis7.5 + sql server). I put one linux box running RHEL with varnish. After some configuration trial and error, I found a configuration that works. Now I want to duplicate the varnish boxes to other availability zones, but continue to pull the pages from the original windows box. It is my understanding that I can put the varnish boxes in different zones and pull from the windows box via its external IP. But what do I need to do in order for each user to receive content from the box physically closest to them? Is this even possible? Thank you! Answer Why would you not use Amazon's CloudFront for this? You're already trusting Amazon, and they support custom origins and caching dynamic objects. Don't build your own CDN; there's no way you can do it cheaper or better than the CDNs already out there.

storage - ML350 G5 - upgrade hot swap drives to 6 Gb/s?

I support a non-profit. They got an ML350 G5 donated to them and I immediately put it to good use for them :) It came with four HP 300GB 15K SAS drives - and of course, they have all now failed. I have been replacing the drives and they are all 6Gb/s drives, but the HP controller in the ML350 is a 3Gb/s controller. I have a nice 8-port LSI MegaRAID SAS controller sitting here with battery backup and everything - can I just swap out the controllers and expect everything to work? I haven't found much about the differences between the 3 and 6 Gb/s SAS interfaces at the physical layer. As near as I can tell, it should work just fine. I'm just curious if anyone has any experience doing this. I'm not hung up on the drives being hot swappable, so if not I may just see if I can physically alter the case, toss the HP drive cage and stuff a different cage or mounting system in there to accommodate the better controller with the newer drives. Ugh - now I remember why I prefer white

subdomain - SSL and domain masking

Ok, my scenario is interesting. What I want to do is create multiple subdomains for a given URL. For example subdomain1.domain.com and subdomain2.domain.com. I plan to buy an SSL certificate that covers unlimited subdomains for domain.com. However I don't want these to appear as subdomains; I want to give them each their own URL. For this I plan to use URL masking. Which means that at any given time you could visit the subdomain address and see the same content displayed as you would at its respective domain name that's masked on top of it. I know that the other domain names themselves will not show the SSL cert, however will the data still be secure considering it's actually on a subdomain that is SSL certified? Remember it's only URL masking. Is my logic correct that it will be, even though it doesn't show that it is? I mean if you visit subdomain1.domain.com you would see the cert. But just not if you visit its respective masked URL.

hard drive - HP 4TB SATA Midline in D2600 - what disk make is HP using & why WD RE4 don't work?

Does anyone know what make of disk HP are using in their 4TB SATA Midline & why WD RE4 don't work? So we use D2600 crates with HP 2TB disks, and just buy WD2003FYYS - pop them in old HP caddies. Bonus: 5yr warranty, half the price of HP. We just tried a set of 4TB disks, WD4000FYYZ, and each time the D2600 is powered up, random disks are labelled as failed (different every restart). Odd, as the D2600 sees the disks and creates arrays fine - it just incorrectly fails disks on restart. If anyone knows what disks HP ship, that would be great as we can buy those. Does anyone know if firmware updates for the D2600 are likely to make these 4TB disks work in the future, or why the disks might be causing the error? Guess I've a dozen 4TB disks that don't work. Thanks! Robin [summary: no answers as of 01/05/2012: My testing indicates non-HP WD2003FYYS work fine in MSA60 and D2600, WD4000FYYZ are seen only in the D2600, but are failed on server-restart (ie work in neither)

linux - Looking for script to pull server stats and display on a web page I host

Does anyone know of a bash or similar script that will pull stats from a Linux server and display them on a centralized web page or send a report via email every "x" hours? I've found services that offer something close but they are cost prohibitive and missing some of the stats I'd want and some functions just don't work, etc. The idea would be to pull: - CPU load - RAM and SWAP - Disk used / free - TX and RX for a given time period (year to date, month, week and day) - Active processes - IPs attempting login (failures from secure log on my CentOS boxes) I'd like to pull the reports to a central server and display them on a web page but having the reports emailed on a timed basis would be better than nothing.
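No single canonical script exists for this, but the core of the request fits in a few lines of portable shell. A minimal sketch (the output path and report sections are assumptions, not an existing tool), which a cron entry could run every few hours and then mail or copy into a web root:

```shell
#!/bin/sh
# Minimal server-stats report sketch. Linux-only (reads /proc).
OUT=/tmp/stats_report.txt
{
  echo "== $(hostname) -- $(date -u '+%Y-%m-%d %H:%M UTC') =="
  echo "-- load --"
  cut -d' ' -f1-3 /proc/loadavg          # 1/5/15-minute load averages
  echo "-- memory/swap (kB) --"
  grep -E '^(MemTotal|MemFree|SwapTotal|SwapFree)' /proc/meminfo
  echo "-- disk --"
  df -h /                                # used / free on the root filesystem
  echo "-- top processes by CPU --"
  ps -eo pid,pcpu,pmem,comm --sort=-pcpu | head -6
} > "$OUT"
# Then mail the report, or publish it, e.g.: cp "$OUT" /var/www/html/stats.txt
```

TX/RX accounting over year/month/week is the one item plain shell does badly; vnstat is the usual tool for that, and failed logins can be pulled from /var/log/secure on CentOS (e.g. grepping for "Failed password").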

Limit number of connections to a MySQL database

Is it possible to limit the number of concurrent connections to one MySQL database, regardless of which user is connecting to it? Answer No, this is not possible. You can limit the connections one user can have, or globally limit the overall connections one instance will accept. You cannot limit connections to one specific database regardless of the user.
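A sketch of the per-user workaround the answer alludes to (account and database names are placeholders); if each database has its own dedicated account, MySQL's per-user resource limit approximates a per-database cap:

```sql
-- Cap one account at 20 concurrent connections.
-- Note the cap follows the *user*, not the database it happens to use.
GRANT USAGE ON mydb.* TO 'appuser'@'%' WITH MAX_USER_CONNECTIONS 20;

-- Or limit the whole instance:
SET GLOBAL max_connections = 150;
```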

Blank Page: wordpress on nginx+php-fpm

Good day. While this post discusses a similar setup to mine serving blank pages occasionally after having made a successful installation, I am unable to serve anything but blank pages. There are no errors present in /var/log/nginx/error.log , /var/log/php-fpm.log or /var/log/nginx/us/sharonrhodes/blog/error.log . Wordpress 3.0.4 nginx 0.8.54 php-fpm 5.3.5 (fpm-fcgi) Arch Linux php-fpm.conf: [global] pid = run/php-fpm/php-fpm.pid error_log = log/php-fpm.log log_level = notice [www] listen = 127.0.0.1:9000 listen.owner = www listen.group = www listen.mode = 0660 user = www group = www pm = dynamic pm.max_children = 50 pm.start_servers = 20 pm.min_spare_servers = 5 pm.max_spare_servers = 35 pm.max_requests = 500 nginx.conf: user www; worker_processes 1; error_log /var/log/nginx/error.log notice; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; sendfile on; keepalive_

domain name system - Why is DNS failover not recommended?

From reading, it seems like DNS failover is not recommended just because DNS wasn't designed for it. But if you have two webservers on different subnets hosting redundant content, what other methods are there to ensure that all traffic gets routed to the live server if one server goes down? To me it seems like DNS failover is the only failover option here, but the consensus is it's not a good option. Yet services like DNSmadeeasy.com provide it, so there must be merit to it. Any comments?

ubuntu - Port is listening but cannot connect from remote

I have a Catalyst server running on a VM. [info] Hello powered by Catalyst 5.90103 HTTP::Server::PSGI: Accepting connections at http://0:3009/ And connecting within the VM vagrant@precise32:/var/log/apache2$ curl 'http://localhost:3009' <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"> etc ... The port seems to be listening vagrant@precise32:/var/log/apache2$ netstat -an | grep "LISTEN " tcp 0 0 0.0.0.0:36300 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:3009 0.0.0.0:* LISTEN Connecting remotely I can s

domain name system - What are SPF records, and how do I configure them?

This is a canonical question about setting up SPF records . I have an office with many computers that share a single external IP (I'm unsure if the address is static or dynamic). Each computer connects to our mail server via IMAP using Outlook. Email is sent and received by those computers, and some users send and receive email on their mobile phones as well. I am using http://wizard.easyspf.com/ to generate an SPF record and I'm unsure about some of the fields in the wizard, specifically: Enter any other domains who may send or relay mail for this domain Enter any IP addresses in CIDR format for netblocks that originate or relay mail for this domain Enter any other hosts which can send or relay mail for this domain How stringently should SPF-aware MTAs treat this? The first few questions I'm fairly certain about... hope I have given enough info. Answer SPF records detail which servers are allowed to send mail for your domain. Questions 1-3 really su
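As a rough illustration of where the wizard's answers end up (the IP address and include host are placeholders, not recommendations), a finished SPF policy is a single DNS TXT record:

```text
; zone file fragment -- all values are examples
example.com.  IN  TXT  "v=spf1 mx a ip4:203.0.113.25 include:_spf.google.com ~all"
; mx / a    = hosts already listed in your MX and A records may send
; ip4:...   = the office's external address or netblock, in CIDR form
; include:  = delegate to a provider's published SPF policy
; ~all      = softfail everything else (this answers the "how stringent" field)
```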

windows server 2003 - Why won't my router forward ports correctly?

I have a Linksys RV042 dual-wan router (which directly responds to any traffic at *.*.*.* ) and my FTP server is running Windows Server 2003 R2 SP2 and IIS. My server's local IP address is *.*.*.* My router's port forwarding configuration looks like this: DNS [UDP/53~53]-> *.*.*.* HTTP [TCP/80~80]-> *.*.*.* FTP [TCP/20~21]-> *.*.*.* The forwarded port configuration looks like this: MXToolBox.com reports that my ports are open. My server responds perfectly to ftp:// / from any computer on my local network. Anonymous access to my FTP server is allowed from anywhere, and my server responds to the standard FTP ports: 20-21 But when anyone tries to access ftp://joinedsoftware.com/ there is no response. I have tested DNS from internal and external computers, and everything seems to resolve without any problems. Using SmartFTP, this is what the log shows: [12:45:20] SmartFTP v4.0.1122.0 [12:45:21] Resolving host name "joinedsoftware.com" [12:45:21]

domain name system - How Long Will A DNS Change Take

If I'm going to make a DNS change to an A record for my domain (changing from one IP to another), how long can I expect until people are moved over to the new info? Is it simply <= the TTL? I know it used to take a while, but in 2009 how long should I expect? Answer Theoretically everyone should see the updated A record somewhere between instantly and the relevant TTL value. Most registrars set the TTL to 24 hours IIRC (though some use a lower value, like 4 hours), so for 24 hours some people will see the old address and some will see the new one, and by 24 hours after the change everyone should have the new address. If you have access to change the TTL values (i.e. you run your own DNS servers like I do) then you can reduce the TTLs down to something small a day or so before you make your change so the propagation period is much lower. I say "theoretically" above as there will always be some bugs, glitches, and badly configured caches out there tha
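The lower-the-TTL-first trick from the answer looks like this in a BIND-style zone file (names and addresses are placeholders):

```text
; a day or more before the change: shrink the TTL so caches expire quickly
www   300     IN  A  203.0.113.10   ; 5 minutes
; make the change; caches now converge within ~5 minutes
www   300     IN  A  198.51.100.20
; once the move has settled, restore a longer TTL
www   86400   IN  A  198.51.100.20  ; back to 24 hours
```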

Nginx has ssl module, but thinks it doesn't

I'm adding an SSL domain to a host. When I try to restart Nginx, it protests: Restarting nginx: [emerg]: the "ssl" parameter requires ngx_http_ssl_module in /etc/nginx/sites-enabled/my_site_conf:16 This is frustrating because I actually rebuilt Nginx and Passenger specifically in order to make ngx_http_ssl_module available; it wasn't in the running instance, so I took the opportunity to build with Nginx 0.8.53 and Passenger 3.0.1. The relevant configuration block is server { listen 0.0.0.0:443 default ssl; server_name our-site.com; ssl_certificate /etc/nginx/ssl/our-site.crt; ssl_certificate_key /etc/nginx/ssl/our-site.key; ssl_session_timeout 5m; ssl_protocols SSLv2 SSLv3 TLSv1; ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP; ssl_prefer_server_ciphers on; ...etc. etc. (obviously I've obscured the host name, don't need this question coming up in searches fo

If I set a new linux copy up, and then set the hostname in /etc/sysconfig/network (CentOS) is that my FQDN?

Is my fully qualified domain name my host name, if I set up a new server and set it in /etc/sysconfig/network? I have two VMs set up on my LAN, and I noticed that one saw the other as puppet-db.apt15 , which I am guessing is my FQDN? Does the apt15 get appended by my router? Answer You'll also need to change it in /etc/hosts. Once you've done this, a simple "service network restart" should sort you out. Yes, your FQDN is your full domain name, complete with the last dotted part(s) (i.e., for a server named "fancypants", while you might refer to it as "fancypants" casually, its FQDN is actually "fancypants.somedomain.net"). Type "hostname" to see what your server thinks it is.
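A minimal sketch of the two files the answer refers to, assuming a CentOS-era layout (host and domain names are examples):

```text
# /etc/sysconfig/network
HOSTNAME=puppet-db.example.com

# /etc/hosts -- FQDN first, then the short alias
127.0.0.1      localhost
192.168.1.10   puppet-db.example.com   puppet-db
```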

apache 2.2 - SSH to different devices behind router using domain names without Nginx/Reverse Proxy?

I'd like to access multiple different devices residing on my home network, from anywhere else in the world, using custom domain names. I'm running a standard (commercial) router provided by my ISP, and don't have the option to change it (they won't provide the login info for the ADSL connection required by a third-party router). I'm aware of how to access things like my Raspberry Pi running web services with Nginx/Apache/reverse proxy, but would like similar functionality (access to individual devices via unique domain names) for things like SSH too. Reverse proxies typically only handle HTTP(S) traffic, so aren't applicable for this scenario. One comment below mentions IPv6, but how would this actually be achieved through standard commercial routers?
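One common pattern (not mentioned in the question itself) is to forward a single port to one gateway box and hop from there. A client-side ~/.ssh/config sketch, assuming OpenSSH 7.3+ for ProxyJump and made-up host names and ports:

```text
# ~/.ssh/config on the roaming machine -- all names/ports are assumptions
Host home-gw
    HostName home.example.net   # dynamic-DNS name pointing at the router
    Port 2222                   # the one port the router forwards to the Pi
    User pi

Host nas cam1 printer
    ProxyJump home-gw           # tunnel through the gateway to LAN devices
    User admin
```

With this, `ssh nas` reaches the internal device without a per-device port forward; older clients can use `ProxyCommand ssh -W %h:%p home-gw` instead of ProxyJump.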

Should I move servers and change email address after email spoofing?

I'm hoping the community can help me shed some light on a recent email spoof. Yesterday my client woke up to find hundreds of bounced failure notices. The client did not personally send any of these emails. Each failure notice had a different reply-to address, i.e. xyxyxs@client-domain.co.uk trg@client-domain.co.uk hjd@client-domain.co.uk The various reply-to addresses suggest that only the client's domain had been spoofed and not a specific email account (i.e. actual-email@client-domain.co.uk). I know if your email account has been spoofed, it's game over and you need to create a new email address. However, a specific address hasn't been targeted. Am I correct in thinking that I do not need to delete and create a new email address? I also assume the domain would have been widely blacklisted? Should I move hosting companies, and would this make a difference? Either way, I'll be implementing DKIM. Sorry for so many questions, I'm just a little lost as the spoofer

mod rewrite - Preventing vulnerability scripts from scanning apache server

Quick question for you all - fairly frequently in my httpd logs I see things like this: 66.11.122.194 - - [29/Jan/2010:11:06:44 +0000] "GET HTTP/1.1 HTTP/1.1" 400 418 "-" "Toata dragostea mea pentru diavola" 66.11.122.194 - - [29/Jan/2010:11:06:44 +0000] "GET /roundcube//bin/msgimport HTTP/1.1" 404 417 "-" "Toata dragostea mea pentru diavola" 66.11.122.194 - - [29/Jan/2010:11:06:44 +0000] "GET /rc//bin/msgimport HTTP/1.1" 404 413 "-" "Toata dragostea mea pentru diavola" 66.11.122.194 - - [29/Jan/2010:11:06:44 +0000] "GET /mss2//bin/msgimport HTTP/1.1" 404 415 "-" "Toata dragostea mea pentru diavola" 66.11.122.194 - - [29/Jan/2010:11:06:45 +0000] "GET /mail//bin/msgimport HTTP/1.1" 404 415 "-" "Toata dragostea mea pentru diavola" 66.11.122.194 - - [29/Jan/2010:11:06:45 +0000] "GET /mail2//bin/msgimport HTTP/1.1" 404 416 "-&q
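If the goal is merely to cut the log noise, the scanner's distinctive User-Agent can be refused with mod_rewrite. A sketch, with the caveat that UA strings are trivially forgeable, so this is cosmetic noise reduction, not security:

```apache
# Return 403 to requests claiming the "Toata dragostea..." scanner UA
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} "Toata dragostea" [NC]
RewriteRule ^ - [F]
```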

windows server 2008 r2 - Exchange 2010 send from multiple domains

We have a Windows 2008 Enterprise R2 SP1 server with multiple accepted domains configured in our Exchange 2010 console. Configuration of Exchange 2010: In the Exchange console, under organization configuration > hub transport > accepted domains, we have: domain1 > authoritative > default = true domain2 > authoritative > default = false domain3 > authoritative > default = false domain4 > authoritative > default = false We are able to RECEIVE e-mails on ALL the above domains. Just to be clear: I can receive emails to userX@domain1.com , userX@domain2.com, userX@domain3.com and userX@domain4.com without any problems. I am able to send email from userX@domain1.com (the default domain). However, when trying to send emails from userX@domain2.com, userX@domain3.com, and userX@domain4.com, I receive the following error: Delivery has failed to these recipients or groups: destination_example_email You can't send a message on behalf of this user unless you have

security - Securing SSH server against bruteforcing

I have a little SVN server, an old Dell Optiplex running Debian. I don't have that high demands on my server, because it's just a little SVN server... but I do want it to be secure. I just renewed my server to a newer and better Optiplex, and started looking a bit into the old server. I took it down after experiencing problems. When I check the logs, it's full of brute-force attempts, and somehow someone has succeeded in entering my machine. This person created some extra volume called "knarkgosse" with two dirs "root" and "swap1" or something. Don't really know why and what they do, but I sure do want to prevent this from happening again. I find this a bit strange though because I change my password every few months or so, and the passwords are always random letters and numbers put together... not easy to brute-force. I know I can prevent root from logging in, and use sudoers... and change the SSH port, but what more can I do? So I have a few questions: How can
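A hedged sshd_config baseline for an internet-facing box like this (the username is a placeholder); combined with a log-watcher such as fail2ban, it takes passwords out of the attack surface entirely:

```text
# /etc/ssh/sshd_config -- sketch, not a complete file
PermitRootLogin no            # the questioner's own first step
PasswordAuthentication no     # keys only: brute-forcing becomes pointless
PubkeyAuthentication yes
AllowUsers svnuser            # whitelist the one account that needs SSH
MaxAuthTries 3
```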

Windows Active Directory naming best practices?

This is a Canonical Question about Active Directory domain naming. After experimenting with Windows domains and domain controllers in a virtual environment, I've realized that having an Active Directory domain named identically to a DNS domain is a bad idea (meaning that having example.com as an Active Directory name is no good when we have the example.com domain name registered for use as our website). This related question seems to support that conclusion, but I'm still not sure about what other rules there are around naming Active Directory domains. Are there any best practices on what an Active Directory name should or shouldn't be? Answer This has been a fun topic of discussion on Server Fault. There appear to be varying "religious views" on the topic. I agree with Microsoft's recommendation : Use a sub-domain of the company's already-registered Internet domain name. So, if you own foo.com , use ad.foo.com or some such. The most vil

linux - How do I allow sendmail to send TO any address?

UPDATE 10/21/2010 5p: Ok, so sending mail does work, but sending mail to tom@wtw3.com from this box does not. The A record for wtw3.com points to the development box, but the MX records resolve to Google's servers. Is this causing the issue? How do I tell? (Verbose mail output below) [root@dev ~]# mail -v tom@wtw3.com Subject: Test Test Cc: tom@wtw3.com... Connecting to [127.0.0.1] via relay... 220 dev.tridiumtech.com ESMTP Sendmail 8.13.8/8.13.8; Thu, 21 Oct 2010 17:02:05 -0400 >>> EHLO dev.tridiumtech.com 250-dev.tridiumtech.com Hello localhost [127.0.0.1], pleased to meet you 250-ENHANCEDSTATUSCODES 250-PIPELINING 250-8BITMIME 250-SIZE 250-DSN 250-ETRN 250-DELIVERBY 250 HELP >>> MAIL From: SIZE=37 250 2.1.0 ... Sender ok >>> RCPT To: >>> DATA 550 5.1.1 ... User unknown 503 5.0.0 Need RCPT (recipient) >>> RSET 250 2.0.0 Reset state >>> RSET 250 2.0.0 Reset state root... Using cached ESMTP connection to [127.0.0.1] via rel

linux - Server monitoring for medium scale UNIX network

I'm looking for suggestions for a good monitoring tool, or tools, to handle a mixed Linux (RedHat 4-5) and HP-UX environment. Currently we are using Hobbit, which is working reasonably well, but it is becoming harder to keep track of which alerts are sent out for which servers. Features I'd like to see: Easy configuration of servers. The ability to monitor CPU, network, memory, and specific processes. I've looked into Nagios, but from what I have seen it won't be easy to set up the configuration for all of our servers (~200), and without installing a plugin into each agent I won't be able to monitor processes. Answer Set up SNMP on your servers, preferably via some configuration management tool like Puppet . Then, use a monitoring tool like Zenoss Core to monitor them. Zenoss can scan a subnet for hosts, which makes it easy to add 200 servers, and you can group/organize the servers in various ways, to determine what exactly is monitored. We'r

windows server 2008 - Security Cert issue with DNS Alias for RDP

I wonder if someone can help me - I'm a complete newbie when it comes to administering servers, so apologies if I'm missing something obvious... We've decided on a naming convention which uses elements from the periodic table (with VM hosts as molecules). While this is okay for the time being, I'm not relishing the prospect of typing "Unununium" or "Ununnilium" when we've got that many servers. So... I've added some DNS entries for the chemical symbols (He->Helium, Li->Lithium, etc.) When I attempt to RDP to the servers using the DNS alias, I get a certificate warning as (obviously) the name I'm connecting to doesn't match the server in question. Whilst this is only an annoyance for RDP, I'm assuming it may have implications if I use the shortened names for other purposes. So, my question is - is it possible to have the server I'm connecting to use a certificate which covers both? Or have 2 certs side-by-side and auto-

CentOS Adding Hard Disks

I currently have a server with 500GB storage (2 physical disks, raid 0) and it's already full. I've asked my provider for an upgrade of an additional 1TB storage (2 physical disks, raid 0). These are all hardware based raid. Almost all files in /usr/local/nginx/html are videos and have consumed the first hard disk raid. Now I want to know: if I purchase this additional hard disk, will any file saved into the same directory automatically be saved onto the newly added hard disk? What I'm doing is hosting video files in that directory, and I want to continue saving to that particular directory only. Answer You'd be better served by backing up your data, and having your host re-provision (meaning re-install the OS) the server with (4) 1TB drives in RAID10 for fault-tolerance and speed. This usually doesn't add much to the bottom-line monthly price of the server but if your host doesn't have an inventory of 1TB drives, picking them up at current market pr
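To answer the implicit question: no, files will not spill over automatically; a filesystem simply fills until it is full. What the poster describes only works if the new array is mounted somewhere under the html tree, e.g. (device name and subdirectory are assumptions):

```text
# /etc/fstab -- mount the new 1 TB array inside the existing directory
/dev/sdb1   /usr/local/nginx/html/videos2   ext4   defaults   0 2
```

Only files written under videos2/ land on the new disks; alternatively, LVM could merge both arrays into one growable volume, at the cost of re-provisioning the box.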

hardware - What risks are there with mixing SSD models in RAID?

Aside from one type of disk bottlenecking the other, are there any other problems with mixing SSD models in RAID? My problem is, I need to upgrade the storage in a server with 4x Samsung 845DC EVO 960GB in RAID10. These drives are not available anymore, so my options are to either use some newer comparable SSD's or to replace the array altogether. Answer The single biggest thing that crosses my mind isn't SSD-specific: that the biggest danger with RAID is that all the devices in any given RAID are often purchased from the same manufacturer, at the same time, and therefore tend to get to the far end of the bathtub curve and start dying at about the same time. In that sense, buying from different vendors is not only not a bad idea, but best practice. You don't say whether you're doing hardware or software RAID. If it's hardware, you have the issue of whether the new models are supported by the controller, both from a hardware support contract standpo

linux - Setting the hostname: FQDN or short name?

I've noticed that the "preferred" method of setting the system hostname is fundamentally different between Red Hat/CentOS and Debian/Ubuntu systems. CentOS documentation and the RHEL deployment guide say the hostname should be the FQDN: HOSTNAME=<value>, where <value> should be the Fully Qualified Domain Name (FQDN), such as hostname.example.com, but can be whatever hostname is necessary. The RHEL install guide is slightly more ambiguous: Setup prompts you to supply a host name for this computer, either as a fully-qualified domain name (FQDN) in the format hostname.domainname or as a short host name in the format hostname. The Debian reference says the hostname should not use the FQDN: 3.5.5. The hostname The kernel maintains the system hostname. The init script in runlevel S which is symlinked to " /etc/init.d/hostname.sh " sets the system hostname at boot time (using the hostname command) to the name stored in " /etc/hostname ".
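The two conventions side by side, as a sketch (host and domain names are examples):

```text
# Red Hat/CentOS style: FQDN in /etc/sysconfig/network
HOSTNAME=myhost.example.com

# Debian/Ubuntu style: short name in /etc/hostname ...
myhost
# ... with the FQDN supplied by an /etc/hosts line, so `hostname -f` resolves:
127.0.1.1   myhost.example.com   myhost
```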

networking - Does LACP routing type have to be the same on all ends?

I was wondering if the routing mechanism for LACP (source / destination MAC, source+destination MAC, source / destination IP, source+destination IP) has to be the same: within one LACP trunk* between two devices; across multiple LACP trunks* forming one logical path across multiple devices. Also: when using auto LACP, does a negotiation happen so the devices automatically use the same routing strategy? What's the worst that could happen if the routing mechanisms don't match? *I'm using the term "trunk" here in the sense of "grouping multiple physical cables for the goal of redundancy and higher throughput" Answer You need to think of LACP as a "verification mechanism" for link aggregation. You will not achieve any better performance whether you use a static LAG or an LACP LAG. What you will get is faster failover, and some intelligence that checks to make sure that the links are functional before introducing them into the LAG
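On Linux the transmit hash is indeed a purely local setting, which matches the answer's point that LACP negotiates membership, not hashing. A systemd-networkd sketch (interface name is an assumption):

```ini
# /etc/systemd/network/bond0.netdev -- sketch
[NetDev]
Name=bond0
Kind=bond

[Bond]
# 802.3ad = LACP; membership is negotiated with the peer
Mode=802.3ad
# The transmit hash is a local choice -- the peer may hash differently,
# which is legal and only affects per-flow load balance, not correctness
TransmitHashPolicy=layer3+4
```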

linux - Docker: Map container port to single IPv6-address on host

Now that some of my server applications are packed into Docker containers, I'm trying to deploy them on my production servers. My containers should be accessible over IPv4 and IPv6 at the same time. Usually that is no problem: if you map container ports to host ports, e.g. via docker-compose, Docker will use the available IPv6 and IPv4 addresses. My problem is: there is not just one IPv4 and one IPv6 address available on my server, but several. My application container should only use one specific IPv4 address and one specific IPv6 address of the host. You can bind a container port to an IPv4 address by using the following docker-compose syntax: ports: - "127.0.0.1:8001:8001" (see https://docs.docker.com/compose/compose-file/#ports ). Unfortunately I couldn't find any information on how to do that with IPv6 addresses. Is there any way I can bind a container port to a single, specific IPv6 address on my Docker host? Answer As of version 1.15 of docker-compose (
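For reference, a bracketed IPv6 address follows the same host:container:port pattern as IPv4 in a compose file. A sketch, assuming docker-compose 1.15+; the addresses, service, and image names are placeholders:

```shell
# Write a minimal compose file that binds one specific v4 and one v6 address.
cat > docker-compose.yml <<'EOF'
version: "2.1"
services:
  app:
    image: myapp:latest                # placeholder image
    ports:
      - "203.0.113.10:8001:8001"       # only this IPv4 address
      - "[2001:db8::10]:8001:8001"     # only this IPv6 address
EOF
grep -c '8001:8001' docker-compose.yml   # → 2
```

The quotes around each mapping matter in YAML, since both the colons and the brackets would otherwise be misparsed.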

linux - Repartitioning two disks without a loss of data

I am interning at a software company and I have hit somewhat of a brick wall. Here is the deal. The Problem: We have some boxes around here that were incorrectly partitioned for 2 x 500 GB drives. The actual drives are 2 x 1 TB, so these machines are using only half of their available disk space. I am tasked with writing a script to re-partition these drives. Solution Thus Far: I have a script that disables all processes and reboots, and then another script that fixes the partitions. The problem is that this loses data. What I'm Looking For: I need a solution that does this but preserves all the data. My first thought was to just grow the partitions to their appropriate size, but I'm not sure if that is possible. The other solution is to copy all data onto Disk2, partition Disk1, move the data back to Disk1, and finally partition Disk2. The problem is that I am pretty new to Linux and I don't really know how to do it. I have access to the fdisk u
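Growing in place is usually possible when the unused space sits directly after the last partition, so no data has to move. A sketch only, not a tested procedure; the device and partition number are assumptions, and you should take a backup and verify the layout first:

```shell
# Grow the last partition to fill the disk, then grow the filesystem.
# Assumes /dev/sda2 is the final partition and holds ext3/ext4.
parted /dev/sda unit GB print   # confirm where the free space actually is
growpart /dev/sda 2             # from cloud-utils; extends partition 2
resize2fs /dev/sda2             # online filesystem grow for ext3/ext4
```

The copy-to-other-disk approach also works, but it is slower and riskier to script than a straight in-place grow.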

linux - Do you NEED BIND running next to Apache on a production env (apache is using virtualhosts)?

Good afternoon to you all. My question is perhaps a simple one. Say I have a webserver running (Linux + Apache), and I have a few domains I'd like to point to this machine. All great and dandy, BUT! Do I need a DNS server like BIND running on it as well, or can I just host multiple websites using only Apache and its virtual hosts? Thanks guys!
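No local DNS server is required: the domains only need public DNS records (wherever they are hosted, typically at your registrar) pointing at the server's IP, and Apache then picks the right site from the Host header. A hypothetical Apache 2.4 fragment with placeholder domains and paths:

```shell
# Two name-based virtual hosts sharing one IP address.
cat > vhosts.conf <<'EOF'
<VirtualHost *:80>
    ServerName   example.com
    DocumentRoot /var/www/example.com
</VirtualHost>
<VirtualHost *:80>
    ServerName   example.org
    DocumentRoot /var/www/example.org
</VirtualHost>
EOF
grep -c '<VirtualHost' vhosts.conf   # → 2
```

BIND would only be needed if this machine were also meant to be the authoritative nameserver for those domains, which is a separate job from serving the websites.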

hard drive - How can 8 SAS lanes support 1024 disks?

According to the LSI 9207-8e specs, it supports up to 1024 disks at 6Gb/s, which I don't quite understand, since it is only an 8-lane JBOD card. In my case, I have an HP D6000 with 70 disks as JBOD, and get excellent read/write performance. It would be tempting to think that only 8 of the 1024 disks get full bandwidth, but that just can't be the case, or is it for some reason not an issue? If I bought the 12Gb/s LSI JBOD version, would I then get better performance even though the disks are still 6Gb/s? Answer How can 8 SAS lanes support 1024 disks? Using SAS expanders - each SAS channel can theoretically support 65,536 devices per link using expanders - 8 channels can easily support 1024 disks, though it would be horribly contended at 128 disks per channel. If I bought the 12Gb/s LSI JBOD version, would I then get better performance when the disks are still 6Gb/s? Each channel will run at 6Gbps because that's the speed of the slowest device on the ch
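The contention figure is easy to estimate: host-side bandwidth is the lane count times the per-lane rate, shared by every disk behind the expanders. The numbers below assume the 6 Gb/s card and the 70-disk D6000 from the question:

```shell
lanes=8; rate_gbps=6; disks=70
total=$((lanes * rate_gbps))          # aggregate host-side bandwidth
per_disk=$((total * 1000 / disks))    # fair share per disk, in Mb/s
echo "aggregate: ${total} Gb/s"       # → aggregate: 48 Gb/s
echo "per-disk: ~${per_disk} Mb/s"    # → per-disk: ~685 Mb/s
```

Spinning disks rarely sustain anywhere near their 6 Gb/s link rate, which is why 70 of them behind 48 Gb/s of uplink can still perform well in practice.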

linux - Apache server keeps crashing regularly

This just started happening three weeks or so ago. The content of my website hasn't changed; it's just a phpBB forum using MySQL as a backend. Nothing has changed in well over a year as far as content, pages served, etc., but recently, every two days or so, the server just shuts down and cannot be accessed at all (FTP, HTTP, MySQL), and I have to ask my service provider to physically restart the machine. I had thought it was tied to the SIGTERM errors I found in the logs, but I read elsewhere that the SIGTERM is likely my provider restarting the server for me. The problem is I have no idea how to fix these kinds of things or find the root cause, as my skills in this area are lacking. My service provider has basically told me that they don't offer the kind of support I need for the package I have (VPS) and that I pretty much have the keys to the whole thing and am on my own. Anyone have any ideas what could be going on? Apache/2.2.3 (CentOS) 20051115 Linux 2.6.18-028stab057.4 #
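When every service on a box dies at once, memory exhaustion is a common culprit on a small VPS running phpBB plus MySQL. A few read-only checks to run after the next restart; the log paths are typical CentOS defaults and may differ on your system:

```shell
# Did the kernel OOM-killer fire? (classic cause of everything stopping at once)
grep -i 'out of memory\|killed process' /var/log/messages | tail -n 5
# Apache's last words before the outage
tail -n 50 /var/log/httpd/error_log
# Current memory headroom
free -m
```

If the OOM killer shows up in the logs, capping Apache's MaxClients and MySQL's buffer sizes to fit the VPS's RAM is the usual next step.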