[torqueusers] [torquedev] TORQUE authorization security vulnerability
knielson at adaptivecomputing.com
Thu Aug 11 08:49:11 MDT 2011
I think you understand the problem correctly. There is a vulnerability, but a properly configured cluster does not have this problem. TORQUE runs under the assumption that every host in the cluster is secure: it only authorizes access, it does not authenticate users, and it relies on the integrity of the cluster. Even so, we do have MUNGE support, and it looks like there are some GSSAPI implementations out there that we would like to obtain and make available to the community.
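The "authorizes access but does not authenticate" model can be sketched in a few lines. This is a hypothetical illustration, not TORQUE's actual source: the server trusts whatever username the client supplies, gating only on whether the connection arrived from a privileged (below-1024) source port, which only root can bind on a classic UNIX host.

```python
def authorize(peer_port: int, claimed_user: str):
    """Sketch of privileged-port 'authorization' (hypothetical, not TORQUE code).

    The server never verifies the user's identity; it trusts the username
    the client sent, provided the connection originated from a port below
    1024, which only root can bind on a traditional UNIX host.
    """
    if 0 < peer_port < 1024:
        return claimed_user   # "authorized": assume root on the peer vetted it
    return None               # unprivileged source port: reject

# A root-owned client bound to port 1023 is trusted outright...
assert authorize(1023, "alice") == "alice"
# ...while the same claim from an ephemeral port is refused.
assert authorize(40321, "alice") is None
```

The point of the sketch is that the username itself is never checked against anything; integrity of every host that can reach the port is assumed.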
----- Original Message -----
> From: "Michael Jennings" <mej at lbl.gov>
> To: torqueusers at supercluster.org
> Sent: Wednesday, August 10, 2011 1:15:52 PM
> Subject: Re: [torqueusers] [torquedev] TORQUE authorization security vulnerability
> On Tuesday, 09 August 2011, at 17:00:41 (-0600),
> Ken Nielson wrote:
> > Here is the algorithm for the vulnerability. The workaround is
> > pretty easy. Let us know if you have any comments.
> Clearly I'm missing something here.
> Sure, "privileged port" trust is only viable in certain
> carefully-firewalled and methodically-engineered scenarios. We
> learned that back in the mid-90's with NFS and RSH. Ditto for
> remotely-supplied data (including remote user identity).
> It seems to me that anyone who's seen an error message pop up with
> "ruserok()" in it already ought to know that very lax authentication
> and authorization is taking place. But TORQUE is only one of several
> such services in a clustered environment.
> It's not clear how any properly-managed system (read: firewalled
> and/or access controlled) would be vulnerable to this sort of attack.
> If you have root on an external system, you shouldn't be able to
> connect to the scheduler port anyway, so no dice. If you are a
> regular user on the internal system, you can't open a privileged port
> (and can probably already qsub anyway), so no dice.
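The "regular user can't open a privileged port" claim above is easy to probe from an unprivileged shell. A minimal sketch (assuming a stock Linux/UNIX host, where binding below 1024 fails with EACCES unless the process is root or, on modern Linux, holds CAP_NET_BIND_SERVICE):

```python
import errno
import socket

def can_bind_privileged(port: int = 1023) -> bool:
    """Return True if this process may bind a TCP socket to a privileged port."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind(("127.0.0.1", port))
        return True
    except OSError as e:
        # EACCES/EPERM: not root and no CAP_NET_BIND_SERVICE
        if e.errno in (errno.EACCES, errno.EPERM):
            return False
        raise  # anything else (e.g. port already in use) is a different problem
    finally:
        s.close()

print("privileged bind allowed:", can_bind_privileged())
```

Run as an ordinary user this prints False on a default configuration, which is exactly the property the privileged-port trust model leans on.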
> The only issue comes if someone gains root on an internal system. If
> that happens, quite frankly, submitting jobs to the scheduler will be
> the least of my worries.
> So what am I missing? :-)
> Michael Jennings <mej at lbl.gov>
> Linux Systems and Cluster Engineer
> High-Performance Computing Services
> Bldg 50B-3209E W: 510-495-2687
> MS 050C-3396 F: 510-486-8615