
Posts

Showing posts from April, 2015

mysql - How to avoid VMware stunning a client during imaging with Veeam

Recently our MySQL server has been "going away" (i.e. the client connection drops out). After weeks of trying different things (like adjusting packet size), we've discovered that the cause is our Veeam imaging backups, which use the VMware API to snapshot and copy the vmdks etc. We are using ESXi 5 with a CentOS 6.4 guest, running (pretty much) only MySQL 5.1.69-log. The change which seemed to initiate this problem was increasing the physical disk size to 300GB, from about 100, and resizing the guest filesystem to use most of the new capacity. Ever since the disk was increased, we've been getting these problems during backups - presumably due to the increased time it takes to perform snapshot-related functions. The new disks are 2x300GB Gen8 15k SAS in RAID1. The old disks would have been similar, only smaller. The target of the Veeam process is a ReadyNAS over a 1Gb dedicated ethernet link (i.e. separated from general office traffic). The host is an HP DL380P tower: ==server spec

linux - What is the "slash" after the IP?

In Amazon EC2, where I set "security groups", it says: Source: 0.0.0.0/0 And then it gives an example of: 192.168.2.0/24 What is "/24"? I know what a port and an IP are. Answer It represents the CIDR netmask - after the slash you see the number of bits the netmask has set to 1. So the /24 in your example is equivalent to 255.255.255.0. This defines the subnet the IP is in - IPs in the same subnet will be identical after applying the netmask. Take AND to mean bitwise &. Then:
192.168.2.5 AND 255.255.255.0 = 192.168.2.0
192.168.2.100 AND 255.255.255.0 = 192.168.2.0
but, for example:
192.168.3.100 AND 255.255.255.0 = 192.168.3.0 != 192.168.2.0
The most common CIDR netmasks are probably /32 (255.255.255.255 - a single host); /24 (255.255.255.0); /16 (255.255.0.0); and /8 (255.0.0.0). I think it's easier to make sense of the numbers if you remember that 255.255.255.255 can be written as FF.FF.FF.FF - and F is of course the same as binary 1111. So
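The bitwise AND above can be reproduced in a few lines of shell (a sketch; the helper names ip_to_int and int_to_ip are just illustrative):

```shell
#!/bin/sh
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  old_ifs=$IFS
  IFS=.
  set -- $1          # split "a.b.c.d" into $1 $2 $3 $4
  IFS=$old_ifs
  echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

# Convert a 32-bit integer back to dotted-quad form.
int_to_ip() {
  echo "$(( ($1 >> 24) & 255 )).$(( ($1 >> 16) & 255 )).$(( ($1 >> 8) & 255 )).$(( $1 & 255 ))"
}

# AND the host address with the /24 mask to get the network address.
net=$(( $(ip_to_int 192.168.2.100) & $(ip_to_int 255.255.255.0) ))
int_to_ip "$net"   # prints 192.168.2.0
```

Running the same AND against 192.168.3.100 yields 192.168.3.0 - a different subnet, exactly as the worked examples show.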

nameserver - Dns and custom name servers not updating?

I need a DNS expert, because I'm thoroughly confused right now... We've got about 30 sites that are registered to custom name servers: ns1.vertigo.bm and ns2.vertigo.bm. Vertigo.bm points to name servers at site5, with the other sites supposedly being passed on with the custom name servers. Now a lookup of vertigo.bm will give me: DNS servers ns2-merton.webserversystems.com [174.120.194.4] ns1-merton.webserversystems.com [174.120.194.3] Which in turn should pass on to the custom name servers; however, doing a lookup with a domain that has these custom name servers (bprfc.bm for example) gives us this: DNS servers ns2.vertigo.bm [174.120.16.36] ns1.vertigo.bm [174.120.16.35] Those are the old IP addresses for the old server... The registrar has said they've updated the name servers, but I don't see any updating! Help! Answer DNS data is cached in servers around the world. Checking your servers for changes is likely to mislead you as to whether a change has be

centos5 - PhantomJS on CentOS 5.5 (glibc and libstdc++ versions)

I'm trying to run PhantomJS on CentOS, but I get the following:
./phantomjs: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.11' not found (required by ./phantomjs)
./phantomjs: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.9' not found (required by ./phantomjs)
./phantomjs: /lib64/libc.so.6: version `GLIBC_2.7' not found (required by ./phantomjs)
./phantomjs: /lib64/libc.so.6: version `GLIBC_2.7' not found (required by /home/bamboo/bamboo-data/xml-data/build-dir/PHANTOMJS-ARTIFACT-JOB1/target/checkout/dists/linux_x64/bin/../lib/libQtGui.so.4)
./phantomjs: /lib64/libc.so.6: version `GLIBC_2.11' not found (required by /home/bamboo/bamboo-data/xml-data/build-dir/PHANTOMJS-ARTIFACT-JOB1/target/checkout/dists/linux_x64/bin/../lib/libQtGui.so.4)
./phantomjs: /lib64/libc.so.6: version `GLIBC_2.9' not found (required by /home/bamboo/bamboo-data/xml-data/build-dir/PHANTOMJS-ARTIFACT-JOB1/target/checkout/dists/linux_x64/bin/../lib/libQtGui.so.4)
./phantomjs: /lib64

domain name system - Why do I need to set a hostname?

I know there's quite a few questions about host names. But even after reading them, I didn't really understand the concept of host names entirely. So here's my question: I've been following this guide in setting up a VPS with Linode. The first step is to set a host name. From what I understand, a host name is just an arbitrary name you can set to identify your machine in a network. Also, the FQDN is the host name plus the domain name (which may or may not be related to web domains hosted on the server). Please correct me if I'm wrong. Then it instructs me to modify /etc/hosts and add in something like: 12.34.56.78 plato.example.com plato So my question is, what exactly does this line accomplish? I've done it before but never really understood what it did. Also, if the host name and the domain name used in the FQDN are just arbitrary, where can they be used? Actual use cases would be very helpful and a detailed explanation would be great. Thanks! Answer
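To make the question concrete, the line from the guide is an /etc/hosts entry, which the local resolver consults before DNS (a sketch; 12.34.56.78 and plato are the guide's placeholders):

```shell
# /etc/hosts fragment - checked before DNS on most Linux systems,
# per the "hosts: files dns" line in /etc/nsswitch.conf.
# Format: IP-address   canonical-FQDN        aliases...
12.34.56.78   plato.example.com   plato

# With this in place, both names resolve locally, with no DNS query:
#   hostname      ->  plato              (the short host name)
#   hostname -f   ->  plato.example.com  (the FQDN, found via this lookup)
```

In practice this is what lets software on the machine (mail servers, sudo, logging) map its own host name to a stable FQDN even when DNS is unavailable.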

permissions - How can I share a live BZR repository with multiple users?

I'm not sure how best to ask this question. Over several years I've developed my way into a corner and need to figure some things out. I almost certainly haven't been following best practices up until now, but there you go. I make and host Django websites on my own Linux (Ubuntu) server. I manage their version control with Bazaar and upload over SSH+BZR. They all go into a parent directory imaginatively called /websites/ . The production copies are just master BZR branches (not exports). I don't run any sort of FTP server, just SSH. My workflow is: I edit a local copy of a website and commit the change. Because they're all bound branches, the commit is pushed to the server automatically, and that has a hook which then runs an update, which in turn decides whether or not the Django site needs to be reloaded. I've just scripted things so they work for me. All the websites' files are owned by my user account oli . The websites currently all run under that account to

ssl - What will happen if client call Apache server by IP and there are two SNI virtual hosts

We have an Apache 2.4 web server with a couple of virtual hosts with different certificates. I have set up SNI name-based virtual hosts, ap.mmm.com and ac.mmm.com, and it's working great. All on the same IP (172.12.12.1) and the same port 443. The question is: what will happen if a client uses the IP and not the server name to reach the Apache server, i.e. uses 172.12.12.1:443 instead of ap.mmm.com?
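One way to find out empirically (a sketch, not from the post: when no SNI name is sent, Apache generally falls back to the first matching virtual host for that IP:port, so the first vhost's certificate is served): connect by IP with openssl s_client and inspect the certificate that comes back.

```shell
# Connect without an SNI name (as an IP-only client would - SNI cannot
# carry an IP literal) and print the subject of the certificate served:
openssl s_client -connect 172.12.12.1:443 </dev/null 2>/dev/null \
  | openssl x509 -noout -subject

# For comparison, the same check with SNI set explicitly:
openssl s_client -connect 172.12.12.1:443 -servername ap.mmm.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject
```

If the two subjects differ, IP-only clients are getting the default (first-defined) vhost, and will likely see a certificate-mismatch warning.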

storage - Hardware raid controller that supports RAID1 with three active drives?

We have decided that RAID1 would be the best fit for our usage scenario for the following reasons: 1) Our overall storage requirements are relatively modest (< 500GB). 2) RAID1 offers the most simplicity in terms of controller overhead and ease of recovery from a failed drive. The only issue that worries us is simultaneous drive failure. In a standard two-drive RAID1 setup this would be terminal. So the question is: are there any hardware RAID controllers that allow the mirroring of 3 drives (preferably with a 4th acting as a hot spare)? The probability of three drives failing simultaneously is sufficiently tiny not to bother us. As far as I can tell this might be possible in software RAID, but it doesn't seem to be an option if you want to use a hardware controller? Any advice (or alternative approaches) welcome!

linux - How do i smoothly update ntpd's peer list?

I have a network of Solaris/Linux servers that have ntpd configured to use a single internal stratum-2 server through a DNS alias/CNAME. This server has been down for some time and the client servers' clocks are out of sync. Since we have another internal stratum-1 server (PPS), the DNS CNAME has been modified to point to the new server (which is up). But using ntpq -p I can see that the client servers are still pointing to the old server. It looks like they are not re-resolving the peer name, so they don't get the new server's IP. How do I smoothly update ntpd's peer list? If I restart (x)ntpd, it's going to create time jumps. I wish ntpd would have updated its peer list / configuration and smoothly synced with the new server. Answer ntpdc can do this for you -- specifically the addpeer and unconfig commands. Basically update your config file, then use ntpdc to add the new peers and remove ("unconfigure") the old ones ( after ntpd acce
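The steps in the answer can be sketched as follows (host names are placeholders, and runtime changes via ntpdc must be permitted by your ntpd configuration):

```shell
# 1. Update /etc/ntp.conf so a future restart also uses the new server.

# 2. Add the new peer at runtime, then unconfigure the old one:
ntpdc -c "addpeer ntp-new.internal.example"
ntpdc -c "unconfig ntp-old.internal.example"

# 3. Verify: the peer list should now show the new server and, after a
#    few polling intervals, an asterisk marking it as the sync source.
ntpq -p
```

Because ntpd keeps running throughout, the clock is slewed gradually toward the new source instead of jumping the way a restart after a long outage would.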

Granting FTP users access to multiple folders on CentOS

I have a web server running CentOS Linux 7.2.1511. I do most of the mundane management tasks through Plesk 12.5.30 Update #29, but I also SSH in and get dirty with the command line when I need to. This server is running several websites. I have several different contractors, each working on their own set of websites. E.g. ContractorA works on Website1, Website2 and Website3. ContractorB works on Website1 and Website4. All websites exist in their own directories under /var/www/vhosts . E.g. /var/www/vhosts/website1.com /var/www/vhosts/website2.com How do I grant each contractor access to their respective sites without granting them access to all websites? I don't want to share credentials between users (i.e. create one FTP account per website and pass those out). I also need this to be scalable. I will be adding more contractors and more websites and I will need to be able to grant any contractor access to any website. As far as I am aware I can only set one home dire
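One scalable approach (my suggestion, not from the post; it assumes per-contractor system users with SFTP access rather than plain Plesk FTP accounts, and a filesystem mounted with ACL support) is per-user POSIX ACLs, which avoid creating one group per site:

```shell
# Grant contractora access to website1 without touching other vhosts:
setfacl -R -m u:contractora:rwX /var/www/vhosts/website1.com
setfacl -R -d -m u:contractora:rwX /var/www/vhosts/website1.com  # default ACL so new files inherit it

# Repeat per (contractor, site) pair; revoke a single grant with:
setfacl -R -x u:contractora /var/www/vhosts/website1.com

# Inspect the effective permissions:
getfacl /var/www/vhosts/website1.com
```

Granting or revoking any contractor on any site is then one command each, which matches the "any contractor, any website" scalability requirement.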

High Availability with 3 servers: To virtualise or not?

We're changing hosts for our SaaS app (IIS+MSSQL) and have an opportunity to redesign the infrastructure: either stick with what we have (which works well) or virtualise with vSphere.
Current:
2x Web/DB servers - each has IIS/MSSQL installed. Windows Network Load Balancing distributes traffic between the 2 nodes with a virtual IP address, and MSSQL mirroring provides automatic failover for the DB.
1x MSSQL witness server (small VM)
If one server fails, NLB reroutes traffic to the other node and MSSQL automatically fails over. There's maybe 40 seconds of downtime while NLB redirects.
Possible:
2x vSphere hosts
Firewall VM – 1 vCPU, 512MB RAM, 20GB HDD
Web Server VM – 1 vCPU, 2GB RAM, 50GB HDD
DB Server VM – 2 vCPU, 4GB RAM, 100GB HDD
1x CentOS Linux SAN (mounted as NFS shares)
My concern is that there aren't enough resources for the DB & web VMs. Currently the Web/DB server makes full use of the node and only has to share a node if one fails. What if the SAN fails? I was advised that the VMs HDD

networking - Static virtual IP in debian 6.0.4

In Debian 6.0.4 my static IP is 192.168.1.151 and I want to add one more IP, 192.168.1.175, as a virtual IP. To do this I made the following changes in /etc/network/interfaces:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
allow-hotplug eth0
#NetworkManager#iface eth0 inet dhcp
auto eth0
iface eth0 inet static
address 192.168.1.151
gateway 192.168.1.1
netmask 255.255.255.0
network 192.168.1.0
broadcast 192.168.1.255
auto eth0:0
iface eth0:0 inet static
address 192.168.1.175
netmask 255.255.255.0
broadcast 192.168.1.255
gateway 192.168.1.1
And when I run the command /sbin/ifconfig -a I get the below information:
eth0 Link encap:Ethernet HWaddr 44:87:fc:eb:b2:50 inet addr:192.168.1.151 Bcast:192.168.1.255 Mask:255.255.

ubuntu - How to avoid outgoing emails to be flagged as spam by google?

I use Exim4 as the MTA on an Ubuntu server. The problem is that all outgoing emails are being flagged as spam by Google Mail. It is very annoying. I appreciate your hints and possible solutions. Answer One thing to look at is whether there is a valid reverse name for your mail server. If your mail server is "mail.mycompany.com" yet, when resolved, it has a different name, some servers, like mine, will reject your email. Another thing to verify is that your mail server has an SPF record . A little more information may help resolve this.
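A minimal SPF record looks like the following (a sketch with placeholder names; it is published as a TXT record in the sending domain's DNS zone):

```shell
# Publish a TXT record like this in the domain's zone (hypothetical domain):
#   mycompany.com.  IN TXT  "v=spf1 mx a:mail.mycompany.com -all"
# "mx" authorizes the domain's MX hosts to send, "a:mail.mycompany.com"
# authorizes that host explicitly, and "-all" tells receivers to reject
# everything else (use "~all" for a softer soft-fail while testing).

# Check what recipients actually see:
dig +short TXT mycompany.com
```

Together with a matching forward/reverse DNS pair for the sending host, this covers the two checks the answer mentions.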

timeout - TIME_WAIT Info (reduce)

I have an Ajax application that makes a request every 3 seconds. The requested page sets the header (header("Connection: Keep-Alive, close");), then performs a database query and returns the latest data. The value for TIME_WAIT is 60 seconds, so even though I close the connection in my requested page (i.e. "Connection: Keep-Alive, close"), the connection seems to be present for the next 60 seconds (this occurs for every Ajax request that I make), so in 1 minute 20 requests are made and the total TIME_WAIT count for that IP seems to be around 20. Is it possible to reduce the TIME_WAIT to, say, 15 seconds, to reduce the overall TIME_WAITs, or is it possible to force a connection close after every Ajax request? Any help will be appreciated. Thanks
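For context (my note, not from the post): on Linux the TIME_WAIT duration is a compile-time constant (TCP_TIMEWAIT_LEN, 60 seconds), so it cannot simply be lowered to 15 seconds via sysctl; the commonly cited knobs only mitigate the effects:

```shell
# Allow reusing a TIME_WAIT socket for new *outgoing* connections when
# TCP timestamps make it safe (this does not shorten TIME_WAIT itself):
sysctl -w net.ipv4.tcp_tw_reuse=1

# Note: net.ipv4.tcp_fin_timeout controls FIN-WAIT-2, not TIME_WAIT,
# despite frequent advice to the contrary.

# Count sockets currently in TIME_WAIT to observe the effect:
ss -tan state time-wait | wc -l
```

For this workload, HTTP keep-alive on the Ajax polling connection (so one connection is reused instead of 20 being opened and closed per minute) attacks the cause rather than the symptom.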

hp - Which controller for external SAS tape drive

We're about to buy an external LTO-4 tape drive for our HP DL380 G6 server, and since SCSI is on the way out, we'll buy a SAS drive, probably the HP StoreEver LTO-4 Ultrium 1760 SAS External Tape Drive (but feel free to recommend something else). Now we haven't got any SAS ports available, so we also need to buy a controller card. Would the following controllers work (listed from cheap to expensive)? Are these cards reliable? Or would you recommend other cards?
Lycom PE-123e
HP Smart Array P212/Z
HP SC44Ge Host Bus Adapter
HP SC08Ge Host Bus Adapter
If the only thing it has to control is a single external tape drive, is there any reason to go for the more expensive models? Which cable do I need?

linux - Amazon Micro Instance Crashed - Help Me Figure Out Why?

I am running an Amazon AWS Micro Linux instance and it crashed during some "light" usage a few days ago. I am running an app that uploads photos to the server. We had maybe 10 users uploading multiple photos during a 1-hour period. At some point, the server stopped responding. I logged into the AWS Console and found that the "Instance Reachability Check" had failed. I rebooted the server, restarted PHP and MySQL, and then had to repair a few MySQL tables that had been corrupted. I had the monitoring tools turned on and the CPU usage indicates that we topped out at 28% - after reading some more documentation about Micro instances, I do not think we maxed out on the CPU, but I could be wrong. I don't know enough to understand what the logs mean. I have found what I believe to be the logs from the server from the time when the problem occurred, and I am hoping that someone can help me decipher what happened: Jul 23 00:19:07 ip-10-117-66-219 kernel: [196

networking - CIDR for Dummies

I understand what CIDR is, and what it is used for, but I still can't figure out how to calculate it in my head. Can someone give a "for dummies" type explanation with examples? Answer CIDR (Classless Inter-Domain Routing, pronounced "kidder" or "cider" - add your own local variant to the comments!) is a system of defining the network part of an IP address (usually people think of this as a subnet mask). The reason it's "classless" is that it allows a way to break IP networks down more flexibly than their base class. When IP networks were first defined, IPs had classes based on their binary prefix:
Class  Binary Prefix  Range                      Network Bits
A      0*             0.0.0.0-127.255.255.255    8
B      10*            128.0.0.0-191.255.255.255  16
C      110*           192.0.0.0-223.255.255.255  24
D      1110*          224.0.0.0-239.255.255.255
E      1111*          240.0.0.0-255.25
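The head-math the question asks about boils down to "set the first N bits of the mask to 1", which a few lines of shell can demonstrate (the helper name is illustrative):

```shell
#!/bin/sh
# Derive the dotted-quad netmask from a CIDR prefix length (1-32):
# start from 32 one-bits, shift in zeros from the right.
prefix_to_mask() {
  m=$(( (0xFFFFFFFF << (32 - $1)) & 0xFFFFFFFF ))
  echo "$(( (m >> 24) & 255 )).$(( (m >> 16) & 255 )).$(( (m >> 8) & 255 )).$(( m & 255 ))"
}

prefix_to_mask 24   # 255.255.255.0   - the classic Class C mask
prefix_to_mask 16   # 255.255.0.0     - the Class B mask
prefix_to_mask 26   # 255.255.255.192 - a "classless" split of a /24
```

The /26 case is the point of CIDR: a prefix length that doesn't land on a class boundary still yields a perfectly valid mask.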

windows server 2012 - IPv6 Client Service

I have a Windows Server 2012 box which provides DHCPv4 and DHCPv6. The NIC on the server has 2 IPv6 addresses. I found out that the second IPv6 address comes from the "IPv6 Client Service", which (only the service - not the IPv6 address) I do need for another service, but I don't need the given IPv6 address - in fact the v6 IP generates some errors. I know there's a netsh command which deletes this address, but unfortunately the service creates another one after a while. Is there a way to disable the generation of another IPv6 address by the "IPv6 Client Service" without disabling the whole service? /edit: (ipconfig output, translated from German)
Description . . . . . . . . . . . : Intel(R) 82574L Gigabit Network Connection
Physical Address. . . . . . . . . : ??-??-??-??-??-??
DHCP Enabled. . . . . . . . . . . : No
Autoconfiguration Enabled . . . . : Yes
IPv6 Address. . . . . . . . . . . : 2a02:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx(Preferred)
IPv6 Address. . . . . . . . . . . : 2a02:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx(Preferred)
Leas

ubuntu - How to make nameservers based on my domain name in Amazon EC2?

I purchased an Amazon EC2 instance and installed Ubuntu Server on it. I have the domain name example.com ; the current nameservers of my domain are ns1.hostcompany.com and ns2.hostcompany.com . I want to point mydomain.com to the instance and make nameservers like ns1.mydomain.com and ns2.mydomain.com. I have an Elastic IP associated with the instance. Answer http://aws.amazon.com/route53/ I'd use Route 53 - cheap and easy to set up DNS inside of Amazon AWS.

linux - Missing Ram, memory leak?

I'm running openSUSE 11 on an HP ProLiant server with 8G of RAM, occasionally with the GNOME desktop [whatever is the current version]. After a while it will just run out of RAM - there will be like 6 or 700k left available - and all the running services don't account for what's missing. If I shut down services in order of physical memory used, I still don't find that missing RAM till I reboot... It's a development server - so I can knock it around a bit... but it's kinda irritating having to reboot it periodically... Anyway - here are the main [important] services, in order:
ColdFusion 9 [Java]
MySQL 5
Apache 2
Postfix
Samba
Fetchmail
GNOME - occasionally
I was wondering if there were any known issues with SUSE or any of the services that can cause such a severe memory issue - generally all those services running should not add up to more than 2G. Here's what I get after a reboot: suse:~ # free total used free shared buffers cached M
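A likely explanation (my suggestion, not from the post) is that the "missing" RAM is the kernel's page cache, which free counts as used even though it is reclaimable on demand:

```shell
# On free(1) of this vintage, the "-/+ buffers/cache" row shows memory
# actually available to applications once reclaimable cache is discounted:
free -m

# Roughly: available ≈ free + buffers + cached.
# If that row's "free" column is healthy, nothing is leaking - the kernel
# is simply using otherwise-idle RAM to cache disk blocks, and will give
# it back under memory pressure.
```

Shutting services down won't recover this memory, which matches the symptom described: the cache only shrinks when something else needs the pages (or after a reboot).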

linux - df vs du. Is my disk really full?

Strange problem... Why does my / partition show as full when the space isn't really used? Fast info:
xwing ~ # df -h
Filesystem    Size  Used Avail Use% Mounted on
rootfs         16G   15G   75M 100% /
/dev/root      16G   15G   75M 100% /
devtmpfs      5,9G     0  5,9G   0% /dev
tmpfs         5,9G  552K  5,9G   1% /run
rc-svcdir     1,0M   72K  952K   8% /lib64/rc/init.d
cgroup_root    10M     0   10M   0% /sys/fs/cgroup
shm           5,9G     0  5,9G   0% /dev/shm
cachedir      4,0M  4,0K  4,0M   1% /lib64/splash/cache
/dev/sda1     124M   43M   76M  36% /boot
/dev/sda5      63G   25G   36G  42% /home
/dev/sda6     483G  147G  312G  33% /mnt/data
tmpfs         8,0G     0  8,0G   0% /var/tmp/portage
Maybe i-nodes? Noo...
xwing ~ # df -i
Filesystem    Inodes  IUsed   IFree IUse% Mounted on
rootfs       1048576 548459  500117  53% /
/dev/root    1048576 548459  500117  53% /
devtmpfs     1525561    517 1525044   1% /dev
tmpfs        1525918 37
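Two usual suspects worth checking here (my suggestions, not from the post): deleted-but-still-open files, which df counts but du cannot see, and files hidden underneath a mount point:

```shell
# 1. Files deleted while a process still holds them open: their blocks
#    stay allocated until the descriptor is closed (restart the process).
lsof +L1            # lists open files whose link count is 0

# 2. Files shadowed by a mount (e.g. written into /home before /dev/sda5
#    was mounted there): bind-mount / elsewhere and measure it directly,
#    staying on one filesystem with -x.
mount --bind / /mnt/rootonly
du -shx /mnt/rootonly
umount /mnt/rootonly
```

If du over the bind mount roughly matches df's 15G, the space is in shadowed files; if not, the lsof output usually names the culprit process.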

linux - How can I sort du -h output by size

I need to get a list of human-readable du output. However, du does not have a "sort by size" option, and piping to sort doesn't work with the human-readable flag. For example, running du | sort -n -r outputs disk usage sorted by size (descending):
du | sort -n -r
65108 .
61508 ./dir3
2056 ./dir4
1032 ./dir1
508 ./dir2
However, running it with the human-readable flag does not sort properly:
du -h | sort -n -r
508K ./dir2
64M .
61M ./dir3
2.1M ./dir4
1.1M ./dir1
Does anyone know of a way to sort du -h output by size? Answer As of GNU coreutils 7.5, released in August 2009, sort allows a -h parameter, which accepts the numeric suffixes produced by du -h : du -hs * | sort -h If you are using a sort that does not support -h , you can install GNU coreutils. E.g. on an older Mac OS X: brew install coreutils du -hs * | gsort -h From the sort manual: -h, --human-numeric-sort compare human readable numbers (e.g., 2K 1G)

bind - BIND9 private DNS server with OpenVPN config file errors

I am setting up a private DNS server that will only be accessible to my OpenVPN users. OpenVPN is set up on the tun0 interface of my Ubuntu 14 server. The issue is that I am getting some errors and I am unsure how to resolve them. I am also unsure if I configured my BIND9 server correctly. Here is what I get when I test my configs:
named-checkconf: nothing...
named-checkzone one.example.com db.one.example.com:
db.one.example.com:17: ignoring out-of-zone data (jeannicolas.com.air.jn)
zone one.example.com/IN: loaded serial 3
OK
named-checkzone 8.10.in-addr.arpa /etc/bind/zones/db.10.8:
dns_rdata_fromtext: /etc/bind/zones/db.10.8:6: near eol: unexpected end of input
zone 8.10.in-addr.arpa/IN: loading from master file /etc/bind/zones/db.10.8 failed: unexpected end of input
zone 8.10.in-addr.arpa/IN: not loaded due to errors.
ifconfig details for OpenVPN:
tun0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:10.8.0.1 P-t-P:10.8.0.2 Mask:255.25

How to upgrade OpenSSL on Ubuntu Server 12.04 LTS? (Heartbleed)

How do I upgrade OpenSSL using Ubuntu's repository? I see the USN at http://www.ubuntu.com/usn/usn-2165-1/ and the package here https://launchpad.net/ubuntu/+source/openssl/1.0.1-4ubuntu5.12 but I can't find instructions on how to use that to upgrade my OpenSSL version. The standard update commands don't upgrade my version of SSL:
sudo apt-get update
sudo apt-get dist-upgrade
openssl
OpenSSL> version
OpenSSL 1.0.1 14 Mar 2012
How do I get the latest version from the repository?
dpkg --list openssl
||/ Name Version Description
+++-=========================-=========================-==================================================================
ii openssl 1.0.1-4ubuntu5.12 Secure Socket Layer (SSL) binary and related cryptographic tools
aptitude show libssl1.0.0
Package: libssl1.0.0
State: installed
Automatically installed: no
Multi-Arch: same
Version: 1.0.1-4ubuntu5.12
Priority: req
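A sketch of the usual fix (my suggestion, not from the post; note that the build date shown by openssl version is misleading, because Ubuntu backports security fixes without bumping the upstream version string):

```shell
# Upgrade the library package as well as the openssl tool itself:
sudo apt-get update
sudo apt-get install --only-upgrade openssl libssl1.0.0

# Confirm the patched *package* version (1.0.1-4ubuntu5.12 or later),
# rather than trusting the "14 Mar 2012" date from openssl version:
dpkg -s libssl1.0.0 | grep '^Version'

# Restart services linked against the old library so they pick up the fix:
sudo service apache2 restart   # repeat for nginx, postfix, etc. as applicable
```

Since long-running daemons keep the old libssl mapped in memory, the restarts (or a reboot) are what actually close the Heartbleed exposure.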

networking - KVM bridge network initial setup - 2 public IPs

I actually have two problems with my server. The setup is like this:
server in a data center connected directly to the internet
one network card / two public IPs (one IP the server gets over DHCP, the other has to be configured manually)
Debian wheezy as KVM host
Debian wheezy as guest
The first problem is that I don't get a network connection on the guest at all. The second problem is that I want the guest to respond on one of the IP addresses; the second IP I want to use to manage the host. Let's start with the first problem. Here is the interfaces file of the host:
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet manual
auto eth0:0
iface eth0:0 inet static
address XX.YYY.ZZZ.161
network XX.YYY.ZZZ.161
netmask 255.255.255.255
broadcast XX.YYY.ZZZ.255
gateway AA.BBB.CCC.1
auto br0
iface br0 inet dhcp
bridge_ports eth0
bridge_stp off
bridge_maxwait 0
bridge_fd 0
With this configuration br0 gets the IP address from DHCP and I can reach the host server on both IP addresses.

linux - mdadm & raid6: does "re-create(with different chunk size) + resync" destroy orig. data?

I would like to rescue data on my software RAID-6 array. I did some stupid actions (described below) with this original array. Main question: I need to know whether the original data stored on the RAID-6 array is definitely lost (or not) after the following actions have been performed on this array (executed in the order listed below): 1) zeroing the superblocks of all the active disks/partitions registered in the array; 2) executing the "mdadm --create ..." command using different options (see below for the list) than were used when the array was created originally: -> different chunk size -> different layout -> different disk order; 3) resyncing the array. Note: specific values of mdadm parameters should not be relevant here, since this is about the principle of how mdadm works... I think points 1) & 2) should not even touch the original data, since they are supposed to manipulate just superblocks. I see point 3) as the most critical from a data-loss point of view: I'm not sure w

linux - What's the best way of handling permissions for Apache 2's user www-data in /var/www?

Has anyone got a nice solution for handling files in /var/www ? We're running name-based virtual hosts and the Apache 2 user is www-data . We've got two regular users & root. So when messing with files in /var/www , rather than having to... chown -R www-data:www-data ...all the time, what's a good way of handling this? Supplementary question: How hardcore do you then go on permissions? This one has always been a problem in collaborative development environments. Answer Attempting to expand on @Zoredache's answer , as I give this a go myself:
Create a new group (www-pub) and add the users to that group:
groupadd www-pub
usermod -a -G www-pub usera ## must use -a to append to existing groups
usermod -a -G www-pub userb
groups usera ## display groups for user
Change the ownership of everything under /var/www to root:www-pub:
chown -R root:www-pub /var/www ## -R for recursive
Change the permissions of all the folders to 2775:
chmod 2775 /var/www ## 2=set

amazon web services - AWS DNS DDoS mitigation

I have an AWS server that is currently under DDoS via DNS amplification. I've setup CloudWatch logs for the VPC ACL and it's logging an enormous amount of rejected DNS traffic. Despite that traffic being rejected, my primary server is unreachable. I have a secondary server on the same VPC and subnet that can be reached without any problem. Why is it that I can access one but not the other? The ACL should be filtering the traffic at the subnet level. So if one is unreachable then they both should be unreachable, but that's not the case. And how does one mitigate a DNS amplification attack on AWS? AWS certainly has big enough pipes. Why is the ACL not doing the job? Answer I ended up solving the issue. There were a couple issues actually. I had only blocked UDP port 53 (DNS) and as it turns out there were other ports being attacked. Since my server is just a web server I was able to block all UDP traffic in the ACL. That solved one side of the attack. T

Spam emails regarding Domain Abuse Notices

I have received a domain abuse notice email from chloe-gray@icann-monitor.org . The email asks me to download a Word document which I believe contains a virus. Dear Domain Owner, Our system has detected that your domain: example.com is being used for spamming and spreading malware recently. You can download the detailed abuse report of your domain along with date/time of incidents. Click Here We have also provided detailed instruction on how to delist your domain from our blacklisting. Please download the report immediately and take proper action within 24 hours otherwise your domain will be suspended permanently. There is also possibility of legal action depend on severity and persistence of your abuse case. Three Simple Steps: Download your abuse report. Check your domain abuse incidents along with date and time. Take few simple steps for prevention and to avoid domain suspension. Click Here to Download your Report Please look into it and cont

linux - After hardware RAID array expansion fdisk won't allow me to use additional available sectors

We have a large ~18TB hardware RAID array on a Dell R720xd. Currently the RAID5 array consists of 6x4TB disks and I needed to extend it. Step 1: expand the hardware RAID array. Simple enough if you have the Dell admin tools installed: omconfig storage vdisk action=reconfigure controller=0 vdisk=1 raid=r5 pdisk=0:1:0,0:1:1,0:1:3,0:1:3,0:1:4,0:1:5,0:1:8,0:1:9 (the new disks were the last two, which can be confirmed by using the omreport tool). That all went fine, though it takes a while, and I was able to confirm the array had been expanded:
% omreport storage vdisk controller=0 vdisk=1
Virtual Disk 1 on Controller PERC H710P Mini (Embedded)
Controller PERC H710P Mini (Embedded)
ID : 1
Status : Ok
Name : bak
State : Ready
Hot Spare Policy violated : Not Assigned
Encrypted : No
Layout : RAID-5
Size

linux - Where is this cron job running from?

Some time ago, I set up a cron job to run a script every minute. Since then, I've upgraded the system from Ubuntu Intrepid to Ubuntu Karmic. Now the job is failing, and I get email about it once a minute. No problem, right? I can just solve the problem and go on my way. Well, OK, but I don't have time to do that right now, so I just want to shut the job off until I get around to it. OK, here's where it gets odd: I can't find the cron job. It's not in the crontab (under /var/spool/cron ) for the user (or any other user). It's not in /etc/crontab. It's not in /etc/cron.d. I've recursively grepped the whole of /etc/ and /var/ for the name of the script. Can't find it. I've run lsof on the PID of the cron process to see if it has some weird location open that I'm unfamiliar with; no go. What am I missing here? The job is running. It's being run by cron (the email comes from cron - I can see it in the system logs running under cron), but it doesn't appear to e