
Active Directory Site/Domain Interactions



I'm having trouble wrapping my head around the myriad Active Directory components, and I'm hoping I can get some opinions or corrections from someone.




At our workplace, we used to have an individual Active Directory domain and site for each of our office locations. Each location also had a pair of domain controllers responsible for that location, which any stations at that location would use for authentication, GPOs, etc.



Recently, as part of another project, we consolidated all of our domains into our old top-level domain. The AD sites remained the same, but every user account, computer, etc. was moved into the top-level domain, and each pair of local DCs was reduced to a single DC that is a member of the top-level domain.



After finishing this, we ran into issues with -very- slow replication between DCs. A secondary issue was that individual stations or users seemed to be authenticating against whichever DC they felt like. On a single station, I found that the user authenticated against one DC, received DNS from another, and pulled GPOs from somewhere else (I may be mixing this up, but you get the idea). It seemed essentially random which DC a given station would contact, even though all the DCs were still members of the proper AD sites.
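(For reference, the DCs a station can discover are visible in the locator SRV records in DNS. The sketch below is only illustrative: it assumes the Python dnspython package and placeholder domain/site names rather than our real ones.)

    # Illustrative sketch only: assumes the dnspython package and placeholder
    # names (corp.example.com / Branch01), not the real environment.
    import dns.resolver

    DOMAIN = "corp.example.com"   # hypothetical AD DNS domain
    SITE = "Branch01"             # hypothetical AD site name

    def list_srv(name: str) -> None:
        """Print priority/weight/target for each SRV record under the given name."""
        try:
            answers = dns.resolver.resolve(name, "SRV")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            print(f"{name}: no records")
            return
        for rdata in answers:
            print(f"{name}: priority={rdata.priority} weight={rdata.weight} target={rdata.target}")

    # Domain-wide locator record: every DC in the domain registers here, and a
    # client whose IP subnet is not mapped to a site falls back to this list.
    list_srv(f"_ldap._tcp.dc._msdcs.{DOMAIN}")

    # Site-specific locator record: only DCs covering the named site should appear.
    list_srv(f"_ldap._tcp.{SITE}._sites.dc._msdcs.{DOMAIN}")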



To remedy this problem, we made all the DCs members of the same "top level" site so that replication would be essentially instantaneous. It may sound kind of icky to have that many DCs perpetually replicating to each other, but we run a fairly small setup, so there weren't any major concerns.



My question is, did we go wrong somewhere? I'm currently working on an SCCM install, and only now am I finding out that this is (to put it lightly) not Microsoft's recommended way to do things. My main concerns are:




1) Is this going to bite us later down the road, especially with us in the midst of trying to set up a stable SCCM install?



2) Can anyone explain why the DCs were getting hit seemingly at random by authentication requests, even though we had them in their corresponding AD sites (which, to the best of my knowledge, should give them priority for local requests from stations within the same site)?



Thanks!


Answer



There is a bit too much to really handle here in a single question.



First, don't start setting up SCCM without getting AD working correctly. Yes, it will bite you later down the road if you don't fix AD first.




Second, moving your DCs all to the same site is not a good start. Instead:

1) Move the DCs back to their respective sites in AD, and make sure your subnets are defined and assigned to the correct sites. The DC locator maps a client's IP address to a subnet, and the subnet to a site, in order to prefer DCs in that site; a client whose subnet isn't assigned to a site will happily use any DC in the domain, which is why your stations seemed to pick DCs at random. (One way to spot-check the subnet assignments is sketched after this list.)

2) If you want to manage replication, take a look at defining specific site links to meet your needs.

3) Fix the replication issues before doing anything drastic (dcdiag etc. to troubleshoot).

4) Make sure your DNS is clean and working.

5) Check your DHCP settings to make sure the proper DNS servers are getting assigned to the workstations.

6) Make sure your DCs are set with the correct DNS servers.
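As a quick sanity check on the subnet assignments, you can read the subnet objects straight out of the configuration partition. The sketch below is only one way to do it, with assumptions called out in the comments: it uses the Python ldap3 package, a placeholder DC name, and a placeholder read-only account. The same information is visible in Active Directory Sites and Services.

    # Rough sketch, not production code: assumes the ldap3 package, a reachable
    # DC at dc01.corp.example.com, and a read-only account -- all placeholders.
    from ldap3 import ALL, NTLM, SUBTREE, Connection, Server

    DC = "dc01.corp.example.com"      # hypothetical domain controller
    USER = "CORP\\svc-ldap-read"      # hypothetical read-only service account
    PASSWORD = "change-me"            # supply real credentials

    server = Server(DC, get_info=ALL)
    conn = Connection(server, user=USER, password=PASSWORD,
                      authentication=NTLM, auto_bind=True)

    # Subnet objects live under CN=Subnets,CN=Sites in the configuration NC.
    config_nc = server.info.other["configurationNamingContext"][0]
    conn.search(
        search_base=f"CN=Subnets,CN=Sites,{config_nc}",
        search_filter="(objectClass=subnet)",
        search_scope=SUBTREE,
        attributes=["cn", "siteObject"],
    )

    for entry in conn.entries:
        # siteObject is the DN of the site the subnet is assigned to; a blank
        # value means clients in that subnet are not tied to any site.
        print(entry.cn, "->", entry.siteObject)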



If you want to go beyond this, find a reputable vendor to work with you to get your environment right.

