Password Not Required: Better Security with Less Friction



Written by SOS Tech Group. Posted in Blog

Everyone knows the frustration of passwords. Creating, resetting, and administering them is time-consuming, and an estimated 59% of people use the same password everywhere. Some passwords change so frequently that folks resort to writing them down on Post-It notes; others don’t change often enough, staying in use for months, maybe years. Passwords have become risky enough on their own that two-factor authentication, a password plus almost anything else, is practically an industry standard in 2019.

Passwords also create the largest attack surface for any organization: in 2017, 81% of all hacking incidents were due to weak or stolen passwords. With statistics like that, passwords alone clearly aren’t enough to protect any data worth protecting. This endless loop of password-related breaches and attacks has finally brought serious attention to the possibility of going ‘passwordless’.

Stated simply, passwordless authentication verifies legitimate access to data using any factor other than a password. A registered smartphone, a fingerprint or voiceprint, questions unique to a user: quite literally anything but a password qualifies.
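As a purely illustrative sketch (not any vendor’s actual implementation), here is one common passwordless pattern, the emailed ‘magic link’, in Python: the server issues a short-lived, signed token instead of ever storing or sending a password. The key, lifetime, and function names are all hypothetical.

```python
import hashlib
import hmac
import secrets

# Hypothetical server-side secret and token lifetime (15 minutes).
SERVER_KEY = secrets.token_bytes(32)
TOKEN_TTL = 15 * 60

def issue_token(email: str, now: float) -> str:
    """Create a signed token binding the user's email to an expiry time.
    The server would email this to the user as a 'magic link'."""
    expiry = int(now) + TOKEN_TTL
    payload = f"{email}|{expiry}"
    sig = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def verify_token(token: str, now: float) -> bool:
    """Check the signature and expiry; no password is ever involved."""
    try:
        email, expiry, sig = token.rsplit("|", 2)
    except ValueError:
        return False
    payload = f"{email}|{expiry}"
    expected = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and now < int(expiry)
```

A real deployment would also mark tokens single-use and deliver them over email or SMS, but even this sketch shows the idea: there is no secret for the user to remember, reuse, or leak.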

The main benefit of password-free authentication is, of course, security. Without a password, there’s nothing to scam, phish, or steal. An interesting side effect of going passwordless is that the user experience improves along with security: no text strings to memorize (and likely reuse across multiple platforms), no more calls to tech support to have a local password reset, and no more waiting on ‘forgotten password’ emails.

Reducing ‘friction’, the time and effort it takes to complete a task, is another important benefit. Roughly 33% of all online shopping transactions are abandoned because a user simply forgot their password for a retailer’s website. Creating a new account, with a username and a password likely reused from another application, is also a form of friction. Once friction is reduced, the process moves forward as efficiently as possible, and in the case of password-free verification, it is not only easier but also more secure.

Naturally, there will be hesitance to adopt fully passwordless authentication: it requires a solid IT foundation, and it requires end-user buy-in and confidence. But remembering complex passwords that change every so often is a challenge, and passwordless authentication lets more users adopt a service or application because they can access it with more security and minimal friction. In turn, that leads to an increase in end-user adoption or, in the retail example mentioned before, new customer acquisition.

As IT progresses, people are growing ever more frustrated with passwords. The passwordless approach could prove efficient and more secure for almost any organization: it saves time and eliminates the frustration of recalling yet another password, while increasing security and user confidence.

Federal Ban on Certain Surveillance Cameras: What Does It Mean?

Written by SOS Tech Group. Posted in Security


The National Defense Authorization Act for Fiscal Year 2019 recently went into effect, after being passed by the US Senate on August 1st, 2018. What’s unique about this year’s NDAA is that it includes a usage and purchase ban on Dahua and Hikvision IP surveillance cameras by the United States government and federally funded projects, with a focus on ‘critical infrastructure’ and ‘national security purposes’.

Citing this severe security risk, the 2019 NDAA goes so far as to bar any company that provides these cameras, even to private enterprise, from doing any business with the federal government. Penalties for violating the Act include contract and/or procurement cancellation and disqualification from future federal business, and serious violations can be referred to “appropriate criminal investigative agencies”.

The concern with these cameras is that both Dahua and Hikvision have significant ownership ties to the government of the People’s Republic of China, and these devices have long been suspected of containing hidden ‘backdoors’ that allow unauthorized access to any network they reside on, from anywhere on the Internet. Hikvision alone is 42% owned by the Chinese government. The US Department of Homeland Security issued a security advisory on Dahua cameras back in 2017, after a credential exploit was found in the source code of Dahua digital video recorders and IP cameras. Separately, another reported exploit in Dahua cameras allowed unauthenticated access to their audio stream, effectively enabling a ‘wiretap’-type feature.

Many authorities are now under pressure to switch to alternative systems. One unnamed security company said more than a dozen federal agencies had approached it for advice, though only about half a dozen of those were actively working to replace the cameras. Hospitals, local governments, and sensitive businesses, such as banks and critical infrastructure companies, had sought similar help, the company said. In 2018, Hikvision’s U.S. sales fell for the first time, and its share price has slumped 20 percent since the NDAA was announced.

Removing these cameras from US Government infrastructure may not be as easy as it seems. Hikvision and Dahua together control roughly 33% of the global video surveillance market, often at low price points that make for easy adoption in places like military bases and government offices that require a large camera footprint. What’s worse, many federal agencies likely don’t even know which manufacturers’ cameras they’ve purchased, in part due to ‘white-labelling’, a practice where an installer or reseller puts their own brand on Chinese-made equipment. Effectively, two cameras running identical Hikvision firmware could carry totally different labels and packaging, making it nearly impossible to identify all of these cameras, let alone remove them.

While the NDAA only applies to government purchases of these cameras, private enterprise should be just as vigilant: these devices greatly increase a network’s attack surface for all types of malware, ransomware, and viruses. It is important to replace these cameras as soon as they’re identified. SOS can help; contact us today.

End of Days: Windows 7 and Server 2008 Head Off into the Sunset…

Written by SOS Tech Group. Posted in IT News, Technology

All good things must end, and PC operating systems are no exception. Windows 7, one of the most popular consumer versions in Microsoft’s history, is calling it quits after almost 10 years in service.

Released to the general public on Oct 22nd, 2009, Windows 7 sold 100 million copies worldwide in just six months. Until the final cutoff date of Jan 14th, 2020, Windows 7 PCs are relatively safe to use. Even after the cutoff, these PCs will still function, but they will become ripe targets for bad actors spreading malware and viruses, since Microsoft is ending all security updates and patching for every iteration of Windows 7. It won’t be long before Windows 7 machines are wide open to attack, and they will stay that way. Put simply, after January 2020, criminals can hit your organization with phishing emails and malware around the clock, and all it would take to shut down your entire network, and potentially your entire business, is one person in your company clicking the wrong email or link on a Windows 7 machine.

Also announced was the end of life for Windows Server 2008, which runs the backbone of many businesses around the globe. Windows Server makes up about 70% of all server installations, and Server 2008 accounts for roughly 40% of those. A huge drawback to sticking with on-premises hardware and moving to Server 2016 or 2019 (the versions with the longest remaining support cycles) is that the upgrade path requires a double migration: from 2008 to 2012, then from 2012 to 2016 or 2019. That’s a lot of staff hours, and a lot of places for data to get lost. Since the effort of migrating an aging on-premises server to a brand-new server that will itself age in place is almost identical to the effort of moving that hardware to fully cloud-hosted servers, many organizations are using the forced change to make the leap to the cloud now, breaking the cycle of hardware dependency so their companies don’t revisit this issue every six to seven years.


Of course, we human beings are heavily resistant to change, but change is unavoidable: software and hardware will move on toward the future, with or without our input and adaptation. As of this writing, there are only four months left to prepare, so time is of the essence. If you have any questions at all regarding the Windows 7 and Server 2008 end of life, reach out to SOS today.


Essential Steps to Network Architecture

Written by SOS Tech Group. Posted in Infrastructure Management, IT News

Everyone has seen security checkpoints at the airport. They ensure that only people who belong at the gates can reach them, and that no bad actors board the airplanes. But why are there so many gates? Luckily, they’re labelled in a sequential, logical fashion. So at the airport, multiple security checkpoints keep things safe, locked doors keep you out of areas where you don’t belong, and accurate labelling directs everyone to where they need to be, safely.

Network segmentation applies those same checkpoints and gates to network traffic.

So what is network segmentation?

In very short terms, network segmentation is the concept of taking a computer network and breaking it down, both logically and physically, into multiple smaller fragments. Physical segmentation breaks a network into smaller physical components, which means investing in additional hardware such as switches, routers, and access points.

While physical segmentation can seem like the easy approach to breaking up a network, it’s often costly and can lead to unintended issues. Think of two Wi-Fi access points right beside each other, each broadcasting a different SSID: inefficient, and a source of interference and conflicts.

Logical segmentation is the more popular method of breaking a network into manageable chunks. Usually, logical segmentation doesn’t require new hardware, provided the infrastructure is already managed. Instead, logical segmentation uses concepts already built into network equipment, like creating separate virtual local area networks (VLANs) that share a physical switch, or dividing different asset types into different subnets and using a router to pass data between the individual subnets.
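To make the subnetting idea concrete, here is a rough Python sketch of carving one office block into per-purpose subnets using the standard `ipaddress` module. The 10.0.0.0/22 block and the segment names are purely illustrative, not a recommended addressing plan.

```python
import ipaddress

# Hypothetical office allocation: one /22 split into four /24 segments.
office = ipaddress.ip_network("10.0.0.0/22")

# One /24 per logical segment (VLAN); subnets() carves the block evenly.
segment_names = ["data", "voice", "guest", "management"]
vlans = dict(zip(segment_names, office.subnets(new_prefix=24)))

for name, subnet in vlans.items():
    # num_addresses includes the network and broadcast addresses.
    print(f"{name:<11} {subnet}  ({subnet.num_addresses - 2} usable hosts)")
```

Each segment then becomes its own broadcast domain, with a router (or layer-3 switch) passing traffic between them.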

Segment a network to achieve the following:

Enhanced Security

By ensuring different groups of devices pass through a firewall, you can apply access control lists (ACLs) to the traffic and enforce the principle of least privilege. It also allows the traffic to be inspected by security tools for potential threats. In a world where nothing ever went wrong, there’d be no need to contain a breach or attack; in reality, attackers can affect an entire network unless they’re limited to a local subnet. And when things do go wrong, segmentation significantly reduces your mean time to resolution by narrowing the focus of your troubleshooting and protection efforts.
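Here is a hedged Python sketch of what least privilege between segments looks like in practice: a default-deny check that permits only explicitly allowed flows. The subnets, ports, and rules are hypothetical examples, not a recommended policy.

```python
import ipaddress

# Hypothetical allow list: (source subnet, destination subnet, destination port).
# Anything not matched here is denied by default.
ALLOW_RULES = [
    ("10.0.0.0/24", "10.0.3.0/24", 443),   # workstations -> internal apps (HTTPS)
    ("10.0.1.0/24", "10.0.3.0/24", 5060),  # phones -> call server (SIP)
]

def is_allowed(src_ip: str, dst_ip: str, dst_port: int) -> bool:
    """Default-deny: a flow passes only if it matches an explicit allow rule."""
    src = ipaddress.ip_address(src_ip)
    dst = ipaddress.ip_address(dst_ip)
    return any(
        src in ipaddress.ip_network(s)
        and dst in ipaddress.ip_network(d)
        and dst_port == p
        for s, d, p in ALLOW_RULES
    )
```

Real firewalls express the same idea in their own rule syntax, but the shape is identical: an ordered allow list followed by an implicit deny.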

Increased performance

Smaller subnets mean fewer devices on each subnet. Fewer devices mean you can build and enforce more granular policies, like access rules and file permissions. Fewer hosts also mean less traffic and a smaller broadcast domain, which reduces ‘noise’. All in, network segmentation contributes to better performance across the entire network and its segments.

Here are some common network segmentation methods:

Creating a guest wireless network

In theory, a client’s guest network could be both wired and wireless, but in practice it is almost always wireless. By implementing a new guest SSID configured for wireless isolation, you effectively create a segment for each guest user, letting them reach the internet without accessing anything else on your network.

Creating a voice network

Unlike guest networks, which are typically wireless, a voice network is normally wired. Low latency and low jitter are extremely important for Voice over IP (VoIP) phones to deliver the best call quality, and mixing voice with data traffic can degrade it. Voice networks are generally segmented into a separate VLAN with a dedicated IP subnet range, away from routine data traffic.

Separating user groups from services

Does every user need access to the entire network? Should the receptionist in your client’s office be able to pull reports from the accounting system? Probably not. By separating user groups and services into their own segments or subnets, you can create groupings of similar users and services, then build traffic policies around those groups, ensuring the right people can access the right things.

If you’re experiencing network issues, SOS can help get you where you need to be today.


Information Technology Then and Now

Written by SOS Tech Group. Posted in IT News


Looking back, 1995 was a pretty big year in IT. Almost 40 million people had Internet access, and that new email thing was catching on. The World Wide Web was exploding. Though some predicted the Internet was just a fad, many more went all in, kicking off what would become known as the dot-com boom:


Netscape, Microsoft, and Opera all launched their first web browsers.

Search engine AltaVista came online.

Amazon and eBay had just opened up shop.

Jerry Yang and David Filo registered yahoo.com.

Hotmail was in the works, launching publicly the following year.


Despite all this online activity, in 1995 a typical small or mid-sized business handled nearly all of its networking and computing on site. IT closets were crammed with shelf after shelf of servers plugged into hubs and bridges, all beige of course; beige ruled this land of office PCs. Everything was wired. And enormous.


In August, Microsoft made history with the launch of Windows 95, which for the first time built a graphical user interface directly into the company’s operating system. The product raked in $30 million in its first day of sale. Floppy disks, the 3.5” kind, were still plentiful, though they were slowly being supplanted by CD-ROMs. Phones sat on desks and were connected through wires. Cell phones, while not quite the bricks they used to be, were still big and relatively uncommon. No one had a Palm device. Software was purchased in a literal box and installed by hand on computers or servers.

All of this hardware was typically maintained by a team of IT specialists. It wasn’t uncommon for a company of under 50 employees to have four or five full-time IT people: a database specialist, a network specialist, a desktop specialist, and so on. If you managed an IT network in 1995, you probably handled Novell NetWare, Microsoft Windows NT, and UNIX. And though user-facing operating systems were moving to a GUI, you spent your day at the command line.

In 2019, more than three billion people worldwide have Internet access. Email is ubiquitous, along with instant messaging and texting. There are nearly one billion websites online right now, and that number climbs by the second. Good, unused domain names are thin on the ground, giving rise to dozens of new top-level domains to satisfy demand.

And that typical small to mid-sized office now?


Laptops, not huge tower computers.

Smartphones (likely brought from home by employees) that come on and off the corporate network throughout the day.

Wi-Fi everywhere.

VoIP for desk phones — if the company even has desk phones anymore.

Storage? It’s all in the cloud. All automatic.


Software is also in the cloud. Users buy and use what they want as they need it; they no longer need to go through IT for that. Companies still have network infrastructure on site. Routers and switches have largely taken the place of hubs and bridges, and now there are wireless controllers, firewalls, perhaps a load balancer. But most of those servers are gone, and so is the specialized team that used to maintain them. Now, a company of 50 employees might have one IT administrator, a generalist who keeps everything running. The budget that IT used to have for purchasing equipment and software for the office, and completing complex projects, has shifted away. It’s been allocated to finance, marketing, HR, and the other lines of business so they can buy the SaaS tools they need. No one knows what a DOS interface looks like anymore, except the IT administrator, who’s still working in the CLI all these years later.


So what’s the upshot of all this change?


In 2019, the IT function is more critical to business than it has ever been. In 1995, a user could work happily and productively all day long and not once need to access the Internet. Not being able to print was an inconvenience, sure, but they could do something else while that was being fixed. Now, if the network goes down, so does the business. Every system and every person in an organization relies on the network to get things done. And yet, IT no longer has the specialists or the budget to manage this business-critical operation. To make things worse, the IT administrator’s tools have not kept pace with change. It’s no wonder that in-house IT teams are struggling.

IT is overdue for a system that makes network operations easier, a system that recognizes the nature of today’s hyper-connected businesses. At SOS we understand these systems through and through. Partner with us today.
