
linux - How to provide external access to only the Git projects (clone/pull/push) of an internal GitLab deployment



We have set up a GitLab server (GitLab 7.0 Community Edition).



It is up and running and our colleagues can use it within the LAN (the IP address and hostname are only reachable from the LAN).




Some of the projects hosted on this GitLab instance should be "shared" with external users (not part of our company). We would like to let them access the Git repositories in order to be able to clone, pull and push.



The GitLab server will stay within the LAN, but we can set up a server in our DMZ to reverse-proxy the GitLab server (or use some other alternative). However, we would like only the ".git" URLs to be accessible via HTTPS, so as not to expose the GitLab WUI (web user interface).



How can we set up the reverse proxy in the DMZ to provide access for external users (on the internet) to our internal Git repositories hosted on GitLab?



Wishes:




  • Only https://*/*.git/* URLs should be allowed externally;

  • HTTP basic authentication on the reverse proxy would be a plus;

  • GitLab's own authentication mechanism over HTTPS must, however, remain in place;

  • Local users on our LAN should still be able to use SSH for Git operations;

  • The GitLab web UI should not be accessible externally.



Note: we already have a server in our DMZ running NGINX. If we can use this software stack to do the reverse proxying, that would be great.



Note 2: this question already had a bounty of 100 which expired, and the points were lost. If I get an answer which solves my problem, I will open a new bounty and award it to that answer.


Answer




Did you try the obvious naive solution?




server {
    [...ssl and servername stuff...]

    location / {
        # fake the hostname to the hostname that gitlab expects
        proxy_set_header   Host hostname-for-gitlabhost;
        proxy_pass         https://internal-gitlab-instance;
        proxy_read_timeout 90;
    }
}
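One of the wishes was HTTP basic authentication on the reverse proxy itself. As a rough sketch (the .htpasswd path and realm name below are just examples), that can be layered into the same location block. Be aware that git's own HTTP authentication uses the same Authorization header, so the proxy credentials and the GitLab credentials can get in each other's way; treat this as the nginx side only:

location / {
    # extra authentication handled by the proxy itself;
    # create the file with e.g.: htpasswd -c /etc/nginx/git.htpasswd someuser
    auth_basic           "External Git access";
    auth_basic_user_file /etc/nginx/git.htpasswd;

    proxy_set_header   Host hostname-for-gitlabhost;
    proxy_pass         https://internal-gitlab-instance;
    proxy_read_timeout 90;
}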


Additionally, you might restrict the location so that only the https://...../*.git URLs are allowed. This should work: instead of the location / above, use something like:


location ~ ^/(.*\.git)(.*)$ {
    proxy_set_header   Host hostname-for-gitlabhost;
    # forward the repository path, anything after .git, and the query string
    proxy_pass         https://internal-gitlab-instance/$1$2$is_args$args;
    proxy_read_timeout 90;
}



This captures the repository path and everything after it, and appends them (plus the query string) to the proxy call within the location statement.



I am not entirely sure this works; I just typed it off the top of my head.
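To verify from an external machine, a quick check along these lines should do (the hostname and repository path are placeholders for your setup). Listing refs over HTTPS should prompt for GitLab credentials and succeed, while a request for the root URL should be answered by nginx itself rather than proxied, so the web UI stays hidden:

# from outside the LAN; dmz-proxy.example.com is a placeholder
git ls-remote https://dmz-proxy.example.com/somegroup/someproject.git
# should ask for GitLab credentials and then list the refs

curl -I https://dmz-proxy.example.com/
# should NOT reach the GitLab sign-in page; nginx answers it locally
# (its default page or an error), since no location matches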


