
Written By

Matt Glenn


March 7, 2017

This article is part of a Managed Services Security Series. Check out Part I to learn more about Ransomware and how it affects your organization.

OST Managed Services has an inside view of many security situations, which lets us see the security design lapses hackers take advantage of most, and how to protect against them. In the last article, I talked about one of the most significant impacts to an organization we see right now: ransomware. This article focuses on what we tend to see as the next most impactful security issue in environments: neglecting to think about basic security when implementing a new application or system, implementation projects that never quite finish, and security cleanup tasks that never happen. It's up to the designers and implementers of a system to put common-sense security measures in place that will still protect the system years down the road.

Disclaimer:  This article is not intended to be your complete guide to designing secure applications and systems. It merely points out a few of the most common problems that we have seen bite people down the road.

Hey, can I have that server list spreadsheet?

OK, we've all done it. New systems are being built, a new application is being written, and someone organizes all the IPs, server names, services, API endpoints, etc. on a spreadsheet to share with the project team. Also on that spreadsheet: the user IDs and passwords. During a project, while things are being built and before that environment holds data or grants access to anything else, this is an acceptable practice. That assumes every one of those passwords is changed, and documented only in a secure password management tool such as PasswordState or CyberArk, before the system transitions to production. Once the system houses data, these accounts should have long, secure passwords and non-standard account names where possible, and the passwords should never be exchanged over an insecure, uncontrolled medium: never email them, write them down, give them verbally, or store them in a file. Leave them in the tool.
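When it's time to rotate those project passwords before go-live, don't invent them by hand. A minimal sketch using Python's standard secrets module (the function name, length, and character set are illustrative choices, not from any particular tool):

```python
import secrets
import string

def generate_service_password(length: int = 32) -> str:
    """Generate a long random password for a service or admin account.

    Uses secrets (a CSPRNG), never random, for anything security-related.
    """
    alphabet = string.ascii_letters + string.digits + "-_.:!@#%"
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Rotate the password that was on the project spreadsheet, then store
# the new value only in the password management tool:
new_password = generate_service_password()
```

Generate, paste into the vault, and the spreadsheet copy becomes harmless.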

The story… Recently someone on the team was asked to help with a network issue at a mid-sized business. Their main network admin had left, and there were a few devices we couldn't get passwords for. In these situations, we have to think like hackers to get things running again. So, first step: put the names of the network devices and the name of the company into Google. Yes, "google it". On page 2 of the search results was an interesting untitled page: a cached copy of a support forum post from this admin. The post contained a link to a Dropbox folder said to hold copies of the configs of the devices the forum was helping troubleshoot. Follow that link, and there were config files from three years earlier for one of the devices. Download… done. That file contained an encrypted password for the "admin" account on one of the switches, stored with an encryption type that is trivial to decode. We pasted it into a website that decodes them for you, and we had a password. The password actually worked on that switch, so we tried the other devices (including the firewall, which had SSH open to the outside, no less), and by golly, it was the same password on all of them.

“Effectively, for three years, the front door to their network (the main firewall), was open to anyone with basic Google skills.”
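The weak scheme in stories like this one is typically Cisco's old "type 7" password obfuscation, which is just an XOR against a fixed, publicly documented key. A sketch of why it offers no real protection (assuming the standard type 7 algorithm; treat this as illustration, not a recommendation to store passwords this way):

```python
# The XOR key is fixed and publicly documented, which is why type 7
# is obfuscation, not encryption: anyone with the config can reverse it.
_KEY = b"dsfd;kfoA,.iyewrkldJKDHSUBsgvca69834ncxv"

def type7_encode(plaintext: str, seed: int = 2) -> str:
    """Obfuscate a password the way a 'type 7' config line stores it."""
    out = f"{seed:02d}"
    for i, ch in enumerate(plaintext.encode()):
        out += f"{ch ^ _KEY[(seed + i) % len(_KEY)]:02X}"
    return out

def type7_decode(encoded: str) -> str:
    """Recover the plaintext from a 'type 7' string: same XOR in reverse."""
    seed = int(encoded[:2])
    body = bytes.fromhex(encoded[2:])
    return bytes(b ^ _KEY[(seed + i) % len(_KEY)] for i, b in enumerate(body)).decode()
```

If a config file ever leaves your control, assume any password stored this way is public.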

Let's just put the userID/password in there and we'll fix it later

When working through a complex project to get two systems talking, particularly against a less-than-well-documented API, we struggle just to get the thing to communicate at all. So most of the time we temporarily throw security best practices out the window, just to get something working, with the idea that we will gradually tighten it back up once the systems are actually talking. The problem is, we run out of time and forget to clean some of those things up. Examples: a username and password in the URL for an API call, a username and password in a cleartext script on a server, LDAP authentication calls not running over SSL, or a non-HTTPS website with a login prompt. Doing this at the beginning of a project, to get a small win while figuring out the issue, is almost a necessity, but we also need to add tasks to the project to secure things back up once everything works.
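One cheap cleanup task for the credentials-in-the-URL case is to move the secret into an environment variable and an Authorization header. A minimal sketch using only Python's standard library (the URL, the TICKET_API_KEY variable name, and the Bearer scheme are illustrative assumptions, not any particular product's API):

```python
import json
import os
import urllib.request

def build_ticket_request(payload: dict) -> urllib.request.Request:
    """Build the API request with the credential in a header, not the URL.

    URLs end up in proxy logs, browser history, and HTML source; headers
    over HTTPS do not.
    """
    api_key = os.environ["TICKET_API_KEY"]  # injected at deploy time, never committed
    return urllib.request.Request(
        "https://www.server.com/api",       # HTTPS, and no ?username=...&password=...
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

The key point is that the secret never appears in the URL, so it never lands anywhere the URL gets written down.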

The story… Recently we were working on an issue with strange happenings in a customer ticketing system: items were being removed, and odd inconsistencies were showing up. Network logging showed a much higher number of accesses from the outside over the previous couple of weeks. All the changes were being made by "admin", so we assumed that account had somehow been compromised. We reset the password, and the activity stopped. The next day, however, we got calls about people not being able to submit tickets. Weird. It turned out many end users at this company submit tickets through an unauthenticated web form, because they do not have AD accounts of their own. The form was simple HTML with some JavaScript, and when we opened it, we saw the issue. The form does a POST to a REST API, sending JSON with the ticket data, and the URL for that POST was "http://www.server.com/api?username=admin&password=thepassword". You don't need to know anything beyond the previous sentence to see the problem.

Someone made this form, got it working with the admin account and password, but never changed it over to an API key before putting it in production, and then proceeded to broadcast the admin credentials to the public internet.

Why would anyone on the outside care about a ticketing system? Read the previous story for a clue (and remember the last time you saw a username and password in a ticket).

Everybody gets a VPN connection!

When putting in a new system, it's critical to understand (and test) who actually has access. One common theme I see: a new system is put in place that either allows in all authenticated users (it prompts for a username and password but doesn't require membership in any particular group or role), or inherits a user list from groups nested in groups nested in groups, the product of years of Active Directory management sprawl. This is especially common in two places: non-Windows systems that authenticate against an Active Directory domain, usually over LDAP, and Windows-based systems that reuse groups already defined in the domain (which, due to nesting, may effectively mean Domain Users). In the first scenario, especially with LDAP, make sure the application requires membership in a group, so that admins can control who actually has access (even if that is everyone at first). In the second, enumerate the groups you have been given and see what the total list of members is after all nested groups are added together, then verify with the project owner that this is really what they want.
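Checking that "total list of members" by hand gets tedious once groups nest a few levels deep. A small illustrative sketch of expanding nested groups into the flat set of users (the dict here is a stand-in for whatever you pull out of AD; cycles in the nesting are handled):

```python
from collections import deque

def effective_members(group: str, groups: dict[str, list[str]]) -> set[str]:
    """Expand nested groups into the flat set of user members.

    `groups` maps a group name to its direct members; members that are
    themselves group names are expanded recursively.
    """
    users: set[str] = set()
    queue = deque([group])
    seen = {group}
    while queue:
        current = queue.popleft()
        for member in groups.get(current, []):
            if member in groups:       # nested group: expand it once
                if member not in seen:
                    seen.add(member)
                    queue.append(member)
            else:                      # leaf user
                users.add(member)
    return users
```

Run this against the group a new system was handed, and show the project owner the resulting list before go-live.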

The story… In a customer environment, a new VPN appliance and firewall were implemented. At one point, while investigating a problem, we noticed a large number of users VPNed into the environment. That was odd: there should only have been a couple dozen, so why were there so many more? We looked into the config, which used Active Directory LDAP authentication, and clearly saw the issue. While it indeed required all users to authenticate via Active Directory, it did not limit VPN connectivity by group membership, so anyone with an AD account could get in. Is that really a problem? Yes, because this domain also housed external customer accounts, which anyone could request via a registration form on the internet.

So to get access to the VPN, you only needed to submit a registration request (which simply created a user with guest access until approved), and because there were no restrictions, you were allowed onto the VPN if you knew how to connect.

The issue was that during the implementation project, the tasks to finish the group membership configuration fell to the end of the schedule and were cut to make time for other work.

Ask yourself three questions

When you're working on a project and want to increase the security of the system being implemented and avoid these issues, just ask yourself three questions:

  • Have any of the service account or admin/root passwords been shared or stored somewhere insecure?
  • Do any APIs, URLs, scripts, etc. store or expose credentials that could be read by someone with lower privileges?
  • Do we actually know WHO has access to this new system?
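The second question can even be partially automated with a quick scan of scripts and configs for the cleartext-credential patterns described above. An illustrative sketch (these regexes are rough heuristics, not a complete secret scanner):

```python
import re

# Heuristics for the most common cleartext-credential mistakes:
# a password passed as a URL query parameter, or assigned in a script.
CREDENTIAL_PATTERNS = [
    re.compile(r"https?://[^\s\"']*[?&](?:password|passwd|pwd)=", re.IGNORECASE),
    re.compile(r"(?:password|passwd|pwd)\s*=\s*[\"'][^\"']+[\"']", re.IGNORECASE),
]

def find_credential_leaks(text: str) -> list[str]:
    """Return the lines of `text` that look like they embed a credential."""
    return [
        line for line in text.splitlines()
        if any(p.search(line) for p in CREDENTIAL_PATTERNS)
    ]
```

Pointing something like this at a project's scripts directory before go-live is a cheap way to catch the "we'll fix it later" leftovers.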


Now, these questions are not an end-all-be-all guide to security; in fact, they barely scratch the surface. They simply represent the most common mistakes we see made in the rush to finish a project that hackers have actually taken advantage of in real-life situations. There are plenty of other mistakes that are just as real, but are not taken advantage of as often (yet).


About the Author

Matt Glenn has been with OST for over a decade. As the Practice Manager for Managed Services, Matt has grown the team to more than 50 professionals and has built additional business units within OST from the ground up. He holds a variety of certifications from AWS, Microsoft, Google and other technology partners, and his career has included positions at IBM and Whirlpool managing security teams and efforts around the globe. Outside of OST, you can find Matt on mountain climbing adventures, kayaking or down in his basement playing at his electronics workbench on the next maker project.