
Posts

Showing posts from September, 2018

active directory - Forest trust on the same subnet to migrate users

Wondering if anyone could offer some advice. I have a domain that desperately needs to be upgraded. Typically, one would add a new DC at a reduced functional level to the domain, transfer the roles, remove the old DC, raise the functional level and be done with it, but I am in a situation where I cannot run adprep or forestprep against the existing domain controller: years of mismanagement and poor maintenance have left broken, untouchable objects in AD. I have tried every fix I could find, even resorting to manual changes to the AD database. Admittedly, that is probably how my predecessor broke it in the first place :/ My alternative option is now to create a fresh, new domain, as we have a small environment. What I would like to do is create a trust between the old forest and the new forest (2003 R2 and 2012 R2) and use ADMT to migrate/copy users with their sIDHistory to the new domain in the new forest, so everyone can just keep their existing profiles. The problem I can…

centos - How bad is it really to install Linux on one big partition?

We will be running CentOS 7 on our new server. We have 6 x 300 GB drives in RAID 6 internal to the server (storage is largely external, in the form of a 40 TB RAID box). The internal volume comes to about 1.3 TB if formatted as a single volume. Our sysadmin thinks it is a really bad idea to install the OS on one big 1.3 TB partition. I am a biologist. We constantly install new software to run and test, most of which lands in /usr/local. However, because we have about 12 non-computer-savvy biologists using the system, we also collect a lot of cruft in /home as well. Our last server had a 200 GB partition for /, and after 2.5 years it was 90% full. I don't want that to happen again, but I also don't want to go against expert advice! How can we best use the 1.3 TB available to make sure that space is available when and where it's needed, without creating a maintenance nightmare for the sysadmin?

Answer: The primary (historical) reasons for partitioning are: to separate the op…
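A sketch of one common compromise, independent of the truncated answer above, is LVM: keep / modest, give /home and /usr/local their own logical volumes, and leave unallocated space in the volume group so whichever area fills first can be grown online. The volume group name "centos" below is only the installer default and is an assumption; adjust it to the real name.

    # Grow /home online when it starts to fill, assuming an XFS /home on an LVM
    # logical volume in a volume group named "centos" (names are placeholders):
    vgs                                  # how much free space is left in the volume group
    lvextend -L +100G /dev/centos/home   # grow the logical volume by 100 GB
    xfs_growfs /home                     # grow the XFS filesystem to match (CentOS 7 defaults to XFS)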

debian - Working from the prompt, but not from the cron job

I created a script which takes the IP configuration as input:

ifconfig | /usr/bin/python "/home/michel/Python/sendIp.py"

When I type that at the command prompt, the script executes fine and the output of ifconfig is available in my script. However, when I put it in my crontab (with crontab -e) like this, it does not read the ifconfig input:

* * * * * ifconfig | /usr/bin/python "/home/michel/Python/sendIp.py"

The input is read in the script like this: data = sys.stdin.read()

Answer: Try using the full path to ifconfig in your cron job.

[~]: which ifconfig
/sbin/ifconfig
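A minimal sketch of the two usual fixes, assuming /sbin/ifconfig as the which output above suggests: either hard-code the path in the cron entry, or give cron a PATH of its own.

    # Option 1: full path in the cron entry
    * * * * * /sbin/ifconfig | /usr/bin/python "/home/michel/Python/sendIp.py"

    # Option 2: set PATH at the top of the crontab so a plain "ifconfig" resolves
    PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    * * * * * ifconfig | /usr/bin/python "/home/michel/Python/sendIp.py"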

ZFS delete snapshots with interdependencies and clones

Below is my list of ZFS volumes and snapshots, as well as the origin and clone for each. I want to delete all the snapshots but keep all the filesystems. How can I do this? I have tried zfs promote followed by attempting to delete each filesystem, for many different combinations of the filesystems. This shifts around where the snapshots "live"; for instance, zfs promote tank/containers/six moves snapshot F from tank/containers/three@F to tank/containers/six@F. The live data in the filesystem isn't modified (which is what I want!), but I still can't delete the snapshot (which is not what I want). A typical zfs destroy attempt tells me it has dependent clones, some of which (the snapshots) I do want to destroy, but others of which (the filesystems) I do not want to destroy. For example:

# zfs destroy tank/containers/six@A
cannot destroy 'tank/containers/six@A': snapshot has dependent clones
use '-R' to destroy the following datasets:
tank/contai…
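A rough sketch of the bookkeeping involved, assuming nothing outside this pool depends on the snapshots: check which clones still reference each snapshot, promote a clone so it takes ownership of its origin snapshot, and destroy the snapshot once no clone lists it as an origin.

    zfs list -t snapshot -o name,clones              # which clones still depend on each snapshot
    zfs get -H -o value origin tank/containers/six   # which snapshot this filesystem was cloned from
    zfs promote tank/containers/six                  # the clone takes over its origin snapshot
    zfs destroy tank/containers/six@A                # succeeds once no clone references this snapshot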

monitoring - Hyperic HQ- Monitor process statistics for 50+ processes on Linux machine

Is there an easy way to get metrics on all processes that start with the letters XYZ? I have about 80 processes that I have to monitor individually, all of which start with the prefix XYZ. I have created a query using the sigar shell, ps State.Name.sw=XYZ, which gives me the list of processes that I want. What I need to do is define this list of processes through that query and collect and track statistics from the Process service: http://support.hyperic.com/display/hypcomm/Process+service What I need is 3 or 4 key statistics for each of the XYZ processes defined by my query to show up as graphs in the web front end. Note: the Hyperic HQ server is installed on a Windows machine and I'm monitoring a Linux box via an agent. Thanks, Chris

Edit: Here is my try at a plugin that may give me what I want, but it's not being inventoried/detected by the Hyperic web UI. Simply pointing me to one of Hyperic's tutorials won't do. Thanks.
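As a sanity check outside Hyperic (this is not the plugin itself, just a way to confirm from the Linux side which processes the State.Name.sw=XYZ query should match and roughly what they consume):

    # list processes whose command name starts with XYZ, with basic CPU/memory figures
    ps -eo pid,comm,pcpu,pmem,rss | awk '$2 ~ /^XYZ/'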

SSL_read() failed (SSL: error:140943F2:SSL routines:SSL3_READ_BYTES:sslv3 error in nginx

2017/05/30 09:44:59 [debug] 3486#3486: *1221 free: 000055D2824FBC40, unused: 24
2017/05/30 09:57:01 [debug] 3486#3486: *1223 SSL certificate status callback
2017/05/30 09:57:01 [debug] 3486#3486: *1223 SSL_do_handshake: -1
2017/05/30 09:57:01 [debug] 3486#3486: *1223 SSL_get_error: 2
2017/05/30 09:57:01 [debug] 3486#3486: *1223 reusable connection: 0
2017/05/30 09:57:01 [debug] 3486#3486: *1223 SSL handshake handler: 0
2017/05/30 09:57:01 [debug] 3486#3486: *1223 SSL_do_handshake: 1
2017/05/30 09:57:01 [debug] 3486#3486: *1223 SSL: TLSv1.2, cipher: "ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(256) Mac=AEAD"
2017/05/30 09:57:01 [debug] 3486#3486: *1223 reusable connection: 1
2017/05/30 09:57:01 [debug] 3486#3486: *1223 http wait request handler
2017/05/30 09:57:01 [debug] 3486#3486: *1223 malloc: 000055D282587F80:1024
2017/05/30 09:57:01 [debug] 3486#3486: *1223 SSL_read: -1
2017/05/30 09:57:01 [debug] 3486#34…
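One way to take the browser out of the picture and reproduce the handshake this debug log shows is a hand-run TLS connection; the hostname below is a placeholder for the real vhost.

    # hand-run the TLS handshake against the vhost in question
    openssl s_client -connect example.com:443 -servername example.com -tls1_2
    # a clean handshake here, with SSL_read errors appearing only for real clients,
    # tends to point at the client or an intermediate proxy rather than at nginx itself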

networking - Realistic Network Load Testing

I am trying to benchmark an ASA under various conditions, but what is throwing me off is that my baseline seems odd. I am trying to load an ASA to full capacity. See the attached topology diagram: https://i.stack.imgur.com/kmRVP.png The players are: C1, a Linux client, runs a continuous, looping download of a 300 GB file from S1, a Linux server running HTTPD. C2, a Linux client, also runs a continuous, looping download of a 300 GB file from S2, a Linux server running HTTPD. C3 runs ab to try to generate more connections, against S3, a Linux server running HTTPD: ab -n100 -c99999999 http://10.0.0.57/ The firewall is a Cisco ASA 5520 running 8.4. What I found odd was that even with all this going on, the most I saw was just over 500 Mbps (observed via nload on both VM boxes' physical interfaces). Is this normal? Everything is gigabit. Some questions: Is it likely that my crappy Linux desk switch is bottlenecking? Does NATing really kill performance that badly, or is something else going on? The CPU on the Dispatch Process was 30% u…
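To separate the switch/NIC question from the firewall question, a raw TCP throughput test between the same endpoints, bypassing HTTP entirely, is a quick check; iperf3 is used here purely as an illustration and the server address is a placeholder.

    # on S1 (or any server behind the ASA):
    iperf3 -s
    # on C1, push several parallel streams through the ASA for a minute:
    iperf3 -c 10.0.0.51 -P 4 -t 60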

domain name system - Issue with www to non www redirect

I am on Slicehost and I followed the articles that they gave for DNS redirection, and the www to non-www URL redirection does work. However, what if I want www.domain.com to be the default domain? Would I put www.domain.com. as my DNS record name, or would I keep domain.com. as my DNS record and then do something else? Basically, what happens is that if someone goes to the URL www.domain.com/directory/something.html they are redirected to domain.com and not domain.com/directory/something.html. I would like the second thing to happen, not just go to domain.com and call it a day. I am running nginx and am confounded on how to solve this issue. I'm not sure whether it's an nginx issue or a DNS issue. Any help would be greatly appreciated!

Answer: From the nginx documentation:

server {
    listen 80;
    server_name nginx.org;
    rewrite ^ http://www.nginx.org$request_uri?;
}

server {
    listen 80;
    server_name www.nginx.org;
    ...
}
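Whichever direction the rewrite ends up going, the path-preserving behaviour can be checked from the shell; example.com below stands in for the real domain.

    # inspect the Location header returned for a deep link
    curl -sI http://www.example.com/directory/something.html | grep -i '^Location'
    # with a $request_uri-style rewrite the path should be preserved, e.g.
    # Location: http://example.com/directory/something.html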

linux - How much memory can I lock before the system starts swapping/thrashing?

I'm trying to use Memtester (http://pyropus.ca/software/memtester/) as a memory stress and correctness test for my organization's Linux boxes. Memtester basically just takes an amount of memory to test as an argument, locks that much memory using memlock(), and then runs a number of patterns to verify that the memory is good. Since I'm trying to verify correctness, I want to test as much of the machine's memory as possible. I've been trying to do this by passing it MemFree from /proc/meminfo. Or rather, I have a script that spawns multiple processes, each asking for MemFree (see below), because the OS doesn't allow a single process to lock more than 50% of memory. The problem is, if I lock more than ~90% of memory, my computer locks up, presumably due to thrashing. About 30 minutes later I'm finally able to use it again. Is there a way, programmatically or otherwise, to find out how much memory I can lock before it starts swapping? I want this to run on any Linux box, so anything that…
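A minimal single-process sketch of sizing the lock with a safety margin instead of using all of MemFree; MemAvailable needs kernel 3.14 or newer, and the 80% margin is an arbitrary starting point to tune.

    # lock roughly 80% of what the kernel reports as available, one pass
    avail_kb=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)
    test_kb=$(( avail_kb * 80 / 100 ))
    memtester "${test_kb}K" 1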

apache 2.2 - How do I install a newer version of Apache2 if apt-get does not automatically find it?

I've installed apache2 on my Ubuntu machine using the apt-get package manager. It installed Apache 2.2.16. I'd like to upgrade to the latest (or at least a newer) version of apache2, but apt-get upgrade and update don't seem to find a newer version. When I type apt-get install -s apache2 it tells me apache2 is already the newest version. Do I need to download this package manually? Is there a reason not to do this? Here is the version of Ubuntu I am running:

DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=10.10
DISTRIB_CODENAME=maverick
DISTRIB_DESCRIPTION="Ubuntu 10.10"

Answer: When using package repositories, you're at the mercy of the repository managers for upgrades. In the vast majority of cases this is a very good thing, as they do a lot of testing on packages and interactions between packages before releasing a new revision into the repo. This prevents you from shooting yourself in the foot in many ways. If you really need bleeding-edge versions, yo…
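Before building from source or adding a third-party archive, it is worth confirming what the configured repositories can actually offer; a couple of standard checks:

    apt-cache policy apache2                                  # every version the configured repos know about, and which one wins
    apt-get update && apt-get -s upgrade | grep -i apache2    # simulate an upgrade, apache2 lines only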

linux - IPv6 prefix management

So, my SixXS POP seems to be in trouble and I was thinking of changing to HE. The idea is to connect to HE, change the radvd setup and... lots of other things: UFW, especially on the laptops, which only allows access to some development services from some RFC 1918 addresses and from my global IPv6 addresses; my servers have fixed IPv6 addresses to simplify DNS setup; some software needs some kind of reference to the "local" addresses in its setup (like Squid ACLs or libvirt networks); etc. So my question is: what is the best way to deal with this? Let's suppose that tomorrow I need to change my tunnel broker, or for whatever reason I need to change my prefix and use another provider as a backup. Do I really need to review all my setup? The only solution I can think of is ULAs and NAT, which I dislike (or ULAs plus global addresses, but I think this setup is not recommended). (A possible solution, if I understand correctly, would be Mobile IPv6, but is this really an option today? How many prov…
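One low-risk way to rehearse a renumbering before committing to it is to add an address from the candidate prefix alongside the existing one and see which configurations break; the prefix below is from the documentation range and is purely a placeholder.

    ip -6 addr add 2001:db8:1234:1::10/64 dev eth0   # add a test address from the candidate prefix
    ip -6 addr show dev eth0                         # confirm both prefixes are present
    ip -6 addr del 2001:db8:1234:1::10/64 dev eth0   # remove it again after testing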

performance - Apache consuming too much CPU and memory

I am having some trouble with CPU load and memory usage with the Apache web server. We are running Ubuntu Server 12.04 LTS on a virtual machine. Our server has the following specs: 8 GB RAM; 4 vCPUs (12 GHz). We configured the server to run a Drupal (7.23) based website, so we installed Apache, PHP, MySQL... The versions are below: Apache 2.2.22; PHP 5.3.10 (PHP is running as an Apache module); APC 3.1.7; MySQL 5.5.31 (all InnoDB tables). I am running some Apache modules too. Take a look (apachectl -M):

core_module (static)
log_config_module (static)
logio_module (static)
mpm_prefork_module (static)
http_module (static)
so_module (static)
actions_module (shared)
alias_module (shared)
authz_host_module (shared)
deflate_module (shared)
dir_module (shared)
env_module (shared)
include_module (shared)
mime_module (shared)
php5_module (shared)
proxy_module (shared)
proxy_http_module (shared)
reqtimeout_module (shared)
rewrite_module (shared)
setenvif_module (shared)
ssl_module (shared…
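With mod_php under prefork, per-process memory usually drives the tuning, so a first measurement is the average size of the Apache children; the process name apache2 is the Ubuntu default, and this is only a rough estimate since shared pages are counted per process.

    ps -C apache2 -o rss= | awk '{sum+=$1; n++} END {if (n) printf "%d processes, avg %.1f MB\n", n, sum/n/1024}'
    # rough ceiling: MaxClients <= (RAM left after MySQL and the OS) / average process size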

Safe to point www to root/naked domain on Azure?

My client would like to use the root/naked domain name for his site, rather than the www subdomain. I am not overly savvy on the fine points of DNS. Is it safe to create a CNAME record that points the www subdomain to the root? i.e. http://www.example.com --> http://example.com I have seen examples redirecting TO the www, but not the reverse. If it matters, the site is on Azure, and DNS settings are hosted at Dreamhost. Many thanks

Answer: CNAMEs from the root to a subdomain are not good (https://serverfault.com/questions/274106/is-there-any-way-to-point-the-root-domain-to-a-cname), but the reverse seems harmless. You can create a CNAME for www as you described and DNS should be fine. But now I can hit your site as example.com or www.example.com. This may not be as neat as your client desires. A better solution would be to create an A record for www which points at a redirect server which will bounce users to the naked domain.
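Once the records are published, dig can confirm that www resolves through the CNAME while the root stays an A record; example.com is a placeholder.

    dig +short www.example.com CNAME   # should return example.com.
    dig +short example.com A           # should return the site's address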

domain name system - Internal and External DNS from Different Servers, Same Zone

I am either having trouble understanding how DNS works, or I am having trouble configuring my DNS correctly (either one isn't good). I am currently working with a domain, I'll call it webdomain.com, and I need to allow all of our internal users to go out to Dotster for our public DNS entries, just like the rest of the world. Then, on top of that, I want to be able to supply just a few override DNS entries for testing servers and equipment that is not available publicly. As an example: public.webdomain.com should come from Dotster; outside.webdomain.com should come from Dotster as well; testing.webdomain.com should come from my internal DNS controller. The problem I seem to be running into at every turn is that if I have an internal DNS controller that contains a zone for webdomain.com, then I get my specified internal entries but never get anything from the public DNS server. This holds true regardless of the type of DNS server I use; I have…
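Querying the internal controller and a public resolver side by side makes the shadowing easy to see; the server addresses below are placeholders.

    dig testing.webdomain.com @192.168.1.10 +short   # internal-only record, answered by the internal server
    dig public.webdomain.com @192.168.1.10 +short    # empty if the internal webdomain.com zone shadows the public one
    dig public.webdomain.com @8.8.8.8 +short         # the public record, via a public resolver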