Posts

Showing posts from December, 2017

Can't connect to MySQL 5.5 with SSL

I am trying to get MySQL SSL replication set up between two brand-new RHEL 6.6 x64 servers. I have replication working without SSL, but I can't get it set up with SSL, and I can't connect directly using SSL either. I have tried connecting from both the master and the slave with

    mysql -h x.x.x.x -u root -p --ssl=1 --ssl-ca=ca.pem --ssl-cert=client-cert.pem --ssl-key=client-key.pem

as well as locally (Windows + MySQL Workbench). No matter what, I get:

    ERROR 2026 (HY000): SSL connection error: error:00000001:lib(0):func(0):reason(1)

Since RHEL came with MySQL 5.1, I upgraded both master and slave to 5.5 per https://webtatic.com/packages/mysql55/ ; mysql --version on both now shows "Ver 14.14 Distrib 5.5.43, for Linux (x86_64) using readline 5.1". Then I tried to set up SSL with self-signed certs and replication based on the tutorial at https://www.howtoforge.com/how-to-set-up-mysql-database-replication-with-ssl-encryption-on-centos-5.4 I made sure to use different Common Nam…
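A quick way to narrow this error down is to check the certificates themselves before blaming MySQL. A minimal sketch, assuming the same ca.pem, client-cert.pem and client-key.pem from the question plus a server-cert.pem on the server side:

    # Verify that both certs chain to the CA the server and client were given
    openssl verify -CAfile ca.pem server-cert.pem client-cert.pem
    # The CA, server and client certificates must carry distinct Common Names,
    # which is exactly what the tutorial warns about
    openssl x509 -noout -subject -in ca.pem
    openssl x509 -noout -subject -in client-cert.pem

If verification fails or the subjects collide, the yaSSL library bundled with MySQL 5.5 typically surfaces it only as the generic ERROR 2026 seen above.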

domain name system - 1server 2ips => 2nameservers possible?

I rented a vServer with 2 IP addresses. I installed ISPConfig on it and I am now configuring the server. I am having a lot of trouble configuring the DNS zones to make my nameservers work. I am not a network specialist, which is why I hope someone here can answer my question. Is it possible, on one server with two IPs, to set up two working nameservers? Using www.dnssy.com to test my site, the last result I achieved was as follows:

    Your NS records at your nameservers are:
    ns2.mydomain.ch reported:
    ns1.mydomain.ch reported:
    ns1.mydomain.ch [100.100.100.101] TTL 86400
    ns2.mydomain.ch [100.100.100.102] TTL 86400

That's not what I want. I would like the second IP to go under my ns2. If you want additional information, just ask in a comment and I will edit this post as fast as possible :)
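For checking the records outside of dnssy.com, a couple of dig queries against each address show what every nameserver is actually answering. A sketch using the placeholder names and addresses from the question:

    # Ask each server directly which NS and A records it serves
    dig @100.100.100.101 mydomain.ch NS +short
    dig @100.100.100.102 mydomain.ch NS +short
    dig @100.100.100.101 ns2.mydomain.ch A +short

If both addresses are served by one BIND/ISPConfig instance listening on both IPs, the answers will be identical. Two "working" nameservers on one machine really means one server with two glue records, which resolvers will accept but which provides no actual redundancy.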

How can I optimize IIS warmup on servers with many sites? (E.g. staged warmup.)

Some of our web servers host quite a few sites. In day-to-day operation this has little effect, as all pages are delivered reasonably fast and the server's resources are well-dimensioned. However, when the host machine needs to restart (for example for system updates), warming up all sites can take a considerable amount of time – sometimes over an hour before all warmup is completed – presumably because over ten sites try to grab CPU time for compilation and loading at once. Searching around the web, suggestions for faster warmup revolve around the idea of a server hosting only a few sites, but what is a good approach for servers with many of them? We considered trying to stage the warmup, so that no more sites get processed at a given time than the CPU has room for. That means the last site is probably not ready much sooner, but the first sites will be there quickly, which is already a lot better than the all-or-nothing free-for-all. Does IIS provide staged warming-up on IIS startup? Answer…
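As far as I know, IIS has no built-in staged warmup, but application pools can be kept from autostarting and then started in batches by a script. A hedged sketch using appcmd (the pool name "SiteA" is a placeholder; IIS 7.5+ assumed):

    rem Disable automatic start for a pool so it does not compete at boot
    %windir%\system32\inetsrv\appcmd set apppool "SiteA" /autoStart:false
    rem Later, a scheduled script starts the pools a few at a time
    %windir%\system32\inetsrv\appcmd start apppool "SiteA"

The Application Initialization module (built into IIS 8+, a download for 7.5) can then warm each site as its pool comes up, rather than on the first request.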

networking - Datacenter ISP wants to assign an IP range to an existing ip address that we already use

I have a question about the setup of our data center regarding a new IP range. The situation is as follows: we own/use a range x.x.x.x/28 that we already got from our data center, which gives us 16 addresses already in use for our servers (13 usable, after subtracting network, broadcast and default gateway). Now we want to order another range, y.y.y.y/27, which has 32 - 3 usable addresses, I presume. Because they cannot assign this range to the same network port that x.x.x.x/28 is assigned to, they ask me for an IP address within the x.x.x.x/28 range, let's say a.a.a.a, to route the new range (y.y.y.y/27) to. The other solution is to hire a new network port to which the y.y.y.y/27 range can be routed normally. My account manager is a bit vague and we can't afford to switch data centers right now. So my question is: how do I configure this? Does a.a.a.a become the default gateway somehow? I can't get a clear picture of how this should work technically. Is there a special name for this t…
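What the ISP is describing is usually called a routed subnet: they point a route for y.y.y.y/27 at a.a.a.a, and the server holding a.a.a.a then answers for (or forwards to) the new addresses itself. A minimal sketch on Linux, with y.y.y.1 standing in for any address from the new block and eth0 for the existing port:

    # a.a.a.a already lives on eth0; add addresses from the routed block there too.
    # No extra gateway is needed: traffic for the /27 arrives via a.a.a.a.
    ip addr add y.y.y.1/27 dev eth0

Outbound traffic still leaves via the existing default gateway in the x.x.x.x/28 range; only the ISP's routing for the new block is different.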

ssl - HAProxy redirect traffic to NGINX getting error "The plain HTTP request was sent to HTTPS port"

What we are trying to do is have HAProxy listen for all incoming traffic on port 443 (HTTPS & WSS). Below is our HAProxy config:

    frontend wwws
        bind 0.0.0.0:443 ssl crt /etc/haproxy/server.pem
        timeout client 1h
        default_backend www_backend

    backend www_backend
        mode http
        stats enable
        stats uri /haproxy
        option forwardfor
        reqadd x-forwarded-proto:\ https
        server server1 backend:3000 weight 1 maxconn 8192 check

0.0.0.0:443 (e.g. https://example.com ) is our HAProxy server listening for all incoming 443 traffic; backend:3000 is our nginx server, which is set to listen for SSL connections. The current problem we are facing is that when we enter https://example.com , the browser shows the following error:

    400 Bad Request
    The plain HTTP request was sent to HTTPS port
    nginx/1.7.5

It does seem like when HAProxy forwards the traffic to nginx (backend:3000) it converts it to HTTP. I thought "reqadd x-forwarded-proto:\ https" was supposed to make sure it is https…
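That behavior matches the config: "bind ... ssl" makes HAProxy terminate TLS at the frontend, so everything it sends to backend:3000 is plain HTTP, and the X-Forwarded-Proto header only tells the backend what the original scheme was; it does not re-encrypt anything. Two hedged options, keeping the names from the config above:

    # Option 1: let nginx accept plain HTTP on 3000 (drop "ssl" from its listen directive)
    #   listen 3000;
    # Option 2: re-encrypt from HAProxy toward the nginx TLS listener
    #   server server1 backend:3000 ssl verify none weight 1 maxconn 8192 check

Note that "verify none" skips certificate verification of the backend; with an internal CA, "ssl verify required ca-file ..." is the stricter choice.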

Truly understanding networking?

I understand the basics of networking, such as LANs and the like. I know what many of the protocols are and how to build a client/server socket program in C. But what I really want is a very good understanding of how networks actually work, not only from a programming aspect but also from an application aspect. I am looking for some material (preferably a book) that will give me a very good foundation to build on. I am torn between wanting to be a programmer or a UNIX admin, so I really should learn and know how to apply networking fundamentals. Does such a concise resource exist? Would it be better to go the more academic route by buying a networking book (such as those from Tanenbaum or Kurose), or is it better to go the IT route, possibly looking into network admin texts or certification books? Thank you all so much. Answer W. Richard Stevens' books 'UNIX Network Programming' and 'TCP/IP Illustrated' are must-reads, no matter which career you go w…

dell - SSD seems dead after wakeup from Windows Sleep, BIOS stalls but doesn't find it anymore

This morning, the following scary scenario happened: I woke up my Windows system, typed in my username and got an error (something like "could not load security xxx", but I'm unsure of the exact wording), and the system auto-restarted after I clicked OK. It no longer boots to the SSD with the Windows 7 OS (I have another disk I can boot to, but that doesn't see the disk either). Of course, this happened right after I initiated a backup procedure, which hasn't succeeded either. The BIOS can't find the drive when I connect it to SATA, and it can't find the drive when I connect it to SAS. I have a Dell Workstation T7400 with the most recent BIOS (version A06); the SAS Host Bus Adapter (HBA) BIOS version is MPTBIOS 6.14.10.00 (2007.09.29) from LSI Logic Corp. Other findings: when connecting to SATA, the Dell logo screen stays up really long (5 minutes) and then at the end of POST it says that a drive is not found; when connecting to SAS, the SAS HBA initializing phase takes long (2 minutes,…

nat - Trigger iptables masquerade before reaching service on gateway?

I have a STUN service running on the same machine that is the gateway for the LAN, and I would like the results from the STUN service to be the same for both internal and external machines. Currently, since the masquerading is done in the POSTROUTING rule when packets leave the gateway, the STUN server just sees the LAN IP/port instead of the NATted IP/port.

    eth1 (lan): 10.0.0.1/32
    eth0 (wan): 1.2.3.4/31
    iptables -A POSTROUTING -t nat -o eth0 -j MASQUERADE
    iptables -A INPUT -p udp -d 1.2.3.4/31 --dport 3701 -j ACCEPT

When a LAN machine with IP 10.0.0.2 contacts the STUN service at 1.2.3.4, the packets get through, but the STUN service sees that the packet was sent from 10.0.0.2. How can I get the NAT translation to occur before the packet arrives at the STUN service, so that the response back from the STUN service won't be from 10.0.0.1 but rather the 1.2.3.4 used when contacting the service?
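Packets addressed to the gateway itself never traverse POSTROUTING; they go through PREROUTING and then INPUT. The nat table does have an INPUT chain, though, and SNAT is valid there, so one hedged possibility (the 10.0.0.0/24 LAN range is an assumption) is:

    # Rewrite the source of LAN packets delivered to the local STUN daemon,
    # so it sees the public address instead of the LAN address
    iptables -t nat -A INPUT -s 10.0.0.0/24 -p udp --dport 3701 -j SNAT --to-source 1.2.3.4

Replies are de-NATted automatically by conntrack, so the client still sees them coming from 1.2.3.4. Whether this gives STUN clients a genuinely useful mapping still depends on the ports conntrack picks.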

Further understanding of SPF, DKIM, and DMARC

I've been trying to wrap my head around some of the information I've gathered online, and I was hoping for some clarification. We are using Office 365 for our email server. A.) Are SPF records and DKIM actually doing anything on their own if DMARC is not enabled or set to p=none? B.) An SPF record dictates which email servers, IPs or external domains are allowed to send email as our domain, correct? For example, we could authorize gmail.com or yahoo.com to send email as our domain with an SPF record? Or, when someone tries to send an email from anywhere in the world as user@ourdomain.com, it would be our SPF record that checks whether the email was sent from a valid source, right? C.) According to my DMARC reports, the only IP address that fails DKIM is our internal corporate IP; why might this be? Everything external passes DKIM. *Emails sent internally, from employee to employee, the header states DKIM=none. *I can't seem to find any emails that say DKIM=fail. D.) DKIM soun…
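Each of these mechanisms is published as a DNS TXT record that receiving servers look up, so they can be inspected directly. A sketch with dig (ourdomain.com is the question's placeholder; selector1 is the usual default DKIM selector for Office 365):

    dig +short TXT ourdomain.com                        # SPF, e.g. "v=spf1 include:spf.protection.outlook.com -all"
    dig +short TXT selector1._domainkey.ourdomain.com   # DKIM public key
    dig +short TXT _dmarc.ourdomain.com                 # DMARC policy, e.g. "v=DMARC1; p=none; rua=mailto:..."

As to A: SPF and DKIM are still evaluated by receivers (and recorded in Authentication-Results headers) without DMARC; what DMARC adds is alignment checking, a published policy for failures, and the aggregate reports mentioned in C.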

networking - Internet connectivity with Domain Controller

We have a Windows 2008 R2 domain controller, and every PC on the network has the IP address of the domain controller set under its DNS settings. Is it normal to have the domain controller also be the DNS server, and to lose all internet connectivity on all of the computers on the network when the domain controller reboots? I understand that DNS is a core part of group policy, but how can I make it so all the machines in my network don't lose connectivity when the domain controller goes down? Do people normally have a backup/mirrored DNS or something? Sorry, I'm self-taught and this is how the existing network was set up, before my time. Answer Two domain controllers, both also DNS servers, clients configured to use both for DNS. That is how it's done.
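On the client side, that just means listing both DCs as DNS servers so resolution fails over when one reboots. A hedged sketch with classic netsh (the interface name and addresses are placeholders):

    netsh interface ip set dns "Local Area Connection" static 192.168.0.10
    netsh interface ip add dns "Local Area Connection" 192.168.0.11 index=2

In practice this is usually pushed by DHCP (option 6) rather than set per machine.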

linux - How to find biggest (in entries, not size) ext4 directory?

On Ubuntu 10.04.3 LTS x86_64, I am seeing the following in /var/log/messages:

    EXT4-fs warning (device sda3): ext4_dx_add_entry: Directory index full!

Relevant info from dumpe2fs:

    Filesystem features:  has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
    Filesystem flags:     signed_directory_hash
    Free blocks:          165479247
    Free inodes:          454382328
    Block size:           2048
    Inode size:           256

I've already read some other questions, such as "ext3_dx_add_entry: Directory index full" and "rm on a directory with millions of files"; those made me think that there must be a directory with a big number of items in it somewhere. Since it is a rather complex directory organization, I have a basic problem: how can I find the directory which is generating those messages?
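A directory's own inode grows with the number of entries it holds, so a directory with millions of files is itself a large file. That gives a cheap way to hunt for the culprit; a sketch, assuming the affected filesystem is the one mounted at /:

    # Directories whose directory file exceeds ~1 MiB hold a very large
    # number of entries; -xdev keeps find on this one filesystem
    find / -xdev -type d -size +1M -exec ls -ld {} + 2>/dev/null

With the 2048-byte block size shown above, the htree index fills up at a smaller directory size than it would with 4096-byte blocks, so even a few-MiB directory file is a candidate.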

hard drive - Strange file corruptions in ext4

Recently I came in contact with what appear to be disk corruption scenarios, and I would like to understand them better. I have a build server which I work with daily. During one full build of a recent LLVM release, which stopped with a strange error message, I got this excerpt in one generated file (X86GenDisassemblerTables.inc):

    ...
    /* 0xa5 */ { /* ModRMDecision */
                 MODRM_ONEENTRY,
                 0 /* EmptyTable */ },
    /* 0xa6 */ { /* ModRMDecision */
                 MODÒM_ONEENTRY,     # Ò = 0xD2
                 0 /* EmptyTable */ },  # R = 0x52
    /* 0xa7 */ { /* ModRMDecision */
                 MODRM_ONEENTRY,
                 0 /* EmptyTable */ },
    ...

This seems to be a single-bit file corruption. I removed the file, the build regenerated it and completed successfully. And today, on a different machine, this .d file was produced during a build:

    output-gcc-8.2.0-x86_64-linux-gnu/obj/headers.hpp.gch: src/headers.hpp
    pp    # What's this?

Everything else -- file size, permissions, even the terminating newline -- was in place.
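The single-bit diagnosis checks out: 'R' is 0x52 and 'Ò' is 0xD2, which differ only in the top bit. A one-liner to confirm:

    # XOR of the two byte values isolates the flipped bit
    printf '0x%02x\n' $(( 0xD2 ^ 0x52 ))    # prints 0x80

Isolated high-bit flips in file contents, with metadata intact, tend to point at bad RAM or a flaky controller/cable path rather than ext4 itself, which is worth keeping in mind before blaming the filesystem.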

kvm virtualization - How to expand EXT4 volume in a centos KVM guest after resizing the guest's LV

I have a Fedora KVM host with all the CentOS VMs in their own LVs. I'd like to expand an ext4 home volume on a guest that resides in an LV called "thelogicalvolume" in volume group "thevolumegroup"... On the host I have run:

    # sudo lvextend -L +1T thevolumegroup/thelogicalvolume

which results in:

    Size of logical volume thevolumegroup/thelogicalvolume changed from 500.00 GiB (128000 extents) to <1.49 TiB (390144 extents).
    Logical volume thevolumegroup/thelogicalvolume successfully resized.

Then, in the guest, I tried:

    # sudo resize2fs /dev/mapper/centos-home

which results in:

    resize2fs 1.42.9 (28-Dec-2013)
    The filesystem is already 116684800 blocks long.  Nothing to do!

df on the VM returns:

    Filesystem               1K-blocks    Used Available Use% Mounted on
    /dev/mapper/centos-root   52403200 1316820  51086380   3% /
    devtmpfs                   1928348       0   1928348   0% /dev
    tmpfs                      1940276       0   1940276   0% /dev/shm
    tmpfs…
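"Nothing to do!" is expected at this point: only the host-side LV grew, while the guest's own partition table, PV and LV still have their old sizes, and resize2fs only sees the guest LV. A hedged sketch of the layers in between (device names like /dev/vda and /dev/vda2 are assumptions; check with lsblk, pvs and lvs first):

    # Inside the guest, once its kernel sees the bigger disk (rescan or reboot):
    growpart /dev/vda 2                            # grow the partition holding the PV (cloud-utils)
    pvresize /dev/vda2                             # extend the PV into the new space
    lvextend -l +100%FREE /dev/mapper/centos-home  # grow the guest LV
    resize2fs /dev/mapper/centos-home              # now there is something to do

resize2fs only has work to do once every layer above it has been grown, top to bottom.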

What Makes Cloud Storage (Amazon AWS, Microsoft Azure, google Apps) different from Traditional Data center storage networking (SAN and NAS)?

There was some confusion because of my question, so to make it simple: "What kind of storage do big cloud providers use, and why?" As far as I understand (although I am not able to find any official comparison of storage networking between typical data centers and clouds), all cloud providers are using DAS, unlike typical data centers. Even though DAS has many disadvantages compared to SAN or NAS, I want to learn in detail why clouds use DAS, whether for storage or application purposes. Any resource or description that clears this up will be appreciated. EDIT: While reading the paper "Networking Challenges and Resultant Approaches for Large Scale Cloud Construction" by David Bernstein and Erik Ludvigson (Cisco), I found that they mention:

    Curiously we do not see Clouds from the major providers using NAS or SAN. The typical Cloud architecture uses DAS, which is not typical of Datacenter storages approaches.

But here there is a conflict: in my opinion, and also stated later in the paper,…

On IPv6 linux router, autoconf and accept router advertisements for single interface

Apparently, right now, if you have /proc/sys/net/ipv6/conf/all/forwarding set to 1, that completely disables autoconfiguration of interfaces and routes, but I have a system with one interface whose address I want to configure dynamically. I have a Linux box with multiple interfaces acting as a router with multiple WAN connections. On the IPv4 side I am using multiple route tables and ip rules to direct traffic to separate uplinks. My primary WAN connection has static IPv6 addresses that are permanently assigned to my connection. The backup connection is basically a cheap broadband connection, and I have no static addresses, IPv6 or IPv4. I can see via radvdump that the provider of my cheap broadband backup link is now sending out IPv6 router advertisements on that link. Since my box is a router and has forwarding enabled, how do I dynamically configure the address on this link? Is there any way to have my system accept router advertisements and configure…
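The per-interface accept_ra sysctl has a third value for exactly this case: 2 means "accept router advertisements even if forwarding is enabled". A sketch, with eth2 standing in for the backup uplink:

    # Accept RAs and autoconfigure addresses on the backup link only
    sysctl -w net.ipv6.conf.eth2.accept_ra=2
    sysctl -w net.ipv6.conf.eth2.autoconf=1

The default route the RA installs may still need an ip rule pointing it into its own routing table, mirroring the IPv4 multi-uplink setup already in place.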

accounting - How can I enable pid and ppid fields in psacct dump-acct?

I am currently using the psacct package on CentOS to perform accounting on processes run by users. The info file (http://www.gnu.org/software/acct/manual/html_chapter/accounting_7.html#SEC24) suggests that it is possible to output pid and ppid depending on what information your operating system provides in its struct acct. pid and ppid are listed in /usr/include/linux/acct.h on my system:

    struct acct_v3
    {
        char    ac_flag;       /* Flags */
        char    ac_version;    /* Always set to ACCT_VERSION */
        __u16   ac_tty;        /* Control Terminal */
        __u32   ac_exitcode;   /* Exitcode */
        __u32   ac_uid;        /* Real User ID */
        __u32   ac_gid;        /* Real Group ID */
        __u32   ac_pid;        /* Process ID */
        __u32   ac_ppid;       /* Parent Process ID */
        ...

But pid and ppid are not output when I run dump-acct:

    # dump-acct /var/account/pacct.1 | tail …

my.cnf parameters to enable binary logging mySQL 4.1.20

I'm having problems enabling binary logging on MySQL 4.1.20. After adding log-bin=/var/log/mysql/tts_db to my.cnf, mysql fails to restart, with the following error in mysqld.log:

    091112 03:36:37  mysqld started
    /usr/libexec/mysqld: File '/var/log/mysql/tts_db.000001' not found (Errcode: 13)
    091112  3:36:37 [ERROR] Could not use /var/log/mysql/tts_db for logging (error 13). Turning logging off for the whole duration of the MySQL server process. To turn it on again: fix the cause, shutdown the MySQL server and restart it.
    091112  3:36:37 [ERROR] Aborting
    091112  3:36:37 [Note] /usr/libexec/mysqld: Shutdown complete
    091112 03:36:37  mysqld ended

Whilst looking at it to ask this question, I may have stumbled on the answer, but I'll check anyway, as I can't restart the server until tomorrow morning. The mysql directory (/var/log/mysql) is owned by root. Is this problem because the mysql user that the server runs as doesn't have the correct privileges for creating a file…
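That reading is almost certainly right: Errcode 13 is EACCES (permission denied), as MySQL's perror utility confirms with "perror 13". A sketch of the usual fix, assuming the server runs as the conventional mysql user:

    chown -R mysql:mysql /var/log/mysql
    chmod 750 /var/log/mysql

On systems with SELinux enforcing, /var/log/mysql may additionally need a file context the mysqld policy is allowed to write to.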

apache 2.2 - How Would I Restrict a Linux Binary to a Limited Amount of RAM?

I would like to be able to limit an installed binary to only be able to use up to a certain amount of RAM. I don't want it to get killed if it exceeds that, only for that to be the maximum amount it could use. I would, however, like the process to die once it reaches a certain amount of RAM, preferably before the server starts to swap heavily. The problem I am facing is that I am running an Apache 2.2 server with PHP and some custom code that a developer is writing for us. Somewhere in their code they launch a PHP exec call that launches ImageMagick's 'convert' to create a resized image file. I'm not privy to many details of the project or the code, but I need to find a solution to keep them from killing the server until they can find a way to optimize the code. I had thought that I could do this with /etc/security/limits.conf by setting a limit on the apache user, but it seems to have no effect. This is what I used:

    www-data  hard  as …
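A plausible reason limits.conf "has no effect" here: it is applied by PAM at login, and a daemon started at boot never goes through a PAM session, so Apache and the convert processes it spawns never pick the limit up. Two hedged alternatives that keep the address-space-cap idea (values are examples; paths vary by distro):

    # Set the limit in Apache's init script or envvars before httpd starts,
    # so every child (including exec'd convert) inherits it; 1 GiB in KiB:
    ulimit -v 1048576

    # Or cap ImageMagick itself via its own resource limits:
    convert -limit memory 256MiB -limit map 512MiB in.jpg -resize 50% out.jpg

ImageMagick also honors these limits from environment variables (MAGICK_MEMORY_LIMIT, MAGICK_MAP_LIMIT), which can be set in Apache's environment without touching the developer's code.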

CentOS system occasionally has issues with permissions/mysql service

Twice now in the last 4 days, at some point during the night, websites go down because the server is unable to connect to the database. At this point everything else is still running (apache etc.); just the database is dead. When I log in via ssh as root to investigate, I have read-only permissions everywhere, which I suspect is the cause of the mysql server dying. I've checked the mysql logs, system logs, basically every log file I can find; nothing indicates an error anywhere around when the problem starts (or even during the entire day). It is like a switch is just flipped. I then restart the system and things are fine again... until a few days later? There was 2G of free RAM the last time this happened, 1.5G free the first time. Minimal cpu usage (< 30%). Any ideas? Answer Disk errors are one possible cause of the "I have read-only permissions everywhere" condition. Some types of hardware or kernel-level disk faults can lead to an inconsistent and…
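When ext3/ext4 hits an I/O error it is commonly configured (errors=remount-ro) to remount the filesystem read-only on the spot, which matches the symptoms exactly: mysql dies because it can no longer write, while already-running processes limp along. Some quick checks the next time it happens, before rebooting (the device name is a placeholder):

    dmesg | grep -iE 'i/o error|remount|ext[34]'   # the kernel's reason for the flip
    mount | grep ' / '                             # shows "ro" if / was remounted
    tune2fs -l /dev/sda1 | grep -i error           # configured error behavior and error count

A read-only root would also explain the empty logs: nothing could be written after the fault.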

Lost Permission on Files using wrong chmod syntax Centos 5.5

itemprop="text"> I was trying to remove write permissions on an entire directory, and I used the incorrect command: chmod 644 -r sites/default I meant to type chmod -R 644 sites/default The result was this: chmod: cannot access `644': No such file or directory $ ls -als sites total 24 4 drwxr-xr-x 5 user group 4096 Jan 11 10:54 . 4 drwxrwxr-x 14 user group 4096 Jan 11 10:11 .. 4 drwxr-xr-x 4 user group 4096 Jan 5 01:25 all 4 d-w------- 3 user group 4096 Jan 11 10:43 default 4 -rw-r--r-- 1 user group 1849 Apr 15 2010 example.sites.php I fixed the permissions on the default folder with $ chmod 644 sites/default But, the following ls shows a all the files with red backgrounds and question marks. I can't access any files unless I am root. $ ls -als sites/d