There are a multitude of reasons many companies are migrating to the cloud. Some are migrating to increase the productivity of their IT staff, as well as the overall workforce. Others are looking to scale down data centers, reduce infrastructure sprawl, and modernize legacy applications. Additionally, some organizations are re-thinking…
Natural disasters such as hurricanes, earthquakes, and fires can put a school district’s data out of reach. These are obvious reasons to have a solid disaster recovery strategy in place. In the aftermath of Superstorm Sandy, which hit the East Coast (NJ, NYC, and Long Island), several school districts were unable to access their systems for days or weeks after the storm had passed. This made it impossible to generate transcripts, pay bills, and in some cases, process payroll.
Whether you are planning to migrate a single critical application or a major portion of your infrastructure, thorough research and a mindful approach are needed before transitioning to the cloud. Many IT groups have struggled when moving key enterprise applications to the public cloud; by learning from their mistakes, they applied those lessons for greater success in subsequent migrations.
If you’re one of the many thinking of moving your IT infrastructure to the public cloud, or have committed to the idea but are struggling with how to go about it, you don’t want to be the one caught trying to reinvent the wheel only to fail miserably. Using the lessons learned from those who have gone before you helps maximize your chances of a successful cloud migration on the first attempt. Done right, a migration can deliver reduced cost, streamlined day-to-day operations, IT team expansion, flexibility, and scalability, just to name a few benefits.
The IT job market has always shifted as technologies advanced, but cloud computing has pushed changes in the market to speeds never seen before. The job market for cloud architects changes as rapidly as the technology itself. At AWS re:Invent 2018 last week, AWS alone announced more than 30 significant new services. Then there are Microsoft, Google, and all the smaller players to keep track of.
This month’s podcast features Matthew Pascucci, cybersecurity practice manager at CCSI, speaking with guest CISO Patricia Smith from Cox Automotive on vulnerability management in the cloud. Does vulnerability management change depending on the deployment model? How do you measure cloud vulnerability metrics? Patricia Smith and Matthew Pascucci touch on these questions and more in this podcast episode.
As more enterprise IT operations organizations move to container technology, IT administrators are having to morph into DevOps roles to deal with the container orchestration systems within IT production. These include systems like Docker Swarm, Apache Mesos, and Kubernetes, as well as a handful of lesser-known players. Container technology has become a reliable way to quickly package, deploy, and run application workloads without having to worry about the underlying physical hardware or operating systems.
Just as important as the containers themselves is the container orchestration technology. These products allow you to start and stop containers through scheduling, and to scale container usage through managed container clusters. Enterprise data centers have come to expect 99.99% uptime, and introducing new technologies puts a lot of pressure on the individuals expected to run them.
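As a minimal sketch of what that scheduling and scaling look like in practice, here is a hypothetical Kubernetes Deployment manifest (the names and image are purely illustrative, and this assumes you have a Kubernetes cluster available):

```yaml
# Illustrative Deployment: asks the orchestrator to keep 3 replicas running
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app            # hypothetical application name
spec:
  replicas: 3              # the cluster scheduler maintains this count
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web
        image: nginx:1.25  # example image; swap in your own workload
        ports:
        - containerPort: 80
```

Scaling is then a one-line change (for example, `kubectl scale deployment web-app --replicas=5`), and the orchestrator restarts any container that dies in order to preserve the desired count.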
Containers and microservices are becoming a very popular option for deploying applications. Containers offer many benefits: faster deployments, reproducible environments, cost optimization, isolation, and flexibility in general.
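To make the packaging and reproducibility benefits concrete, a Dockerfile like the following bundles an application and its runtime into one self-contained image (the app, base image, and port here are hypothetical examples):

```dockerfile
# Build a small, reproducible image for a hypothetical Node.js app
FROM node:18-alpine

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the application code and document the port it listens on
COPY . .
EXPOSE 3000

CMD ["node", "server.js"]
```

Because everything the app needs is baked into the image, the same container runs identically on a laptop, a test server, or a production cluster.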
There is one glaring problem that shows up right after the initial deployment: monitoring and troubleshooting are dramatically more complex with containers. Containers are designed to run programs in an isolated context, which means they tend to be opaque environments. Because of this, the same visibility tools we’ve all been using for years fail to perform as expected. Suddenly, you realize you are flying blind.
This month’s podcast features host Larry Bianculli speaking with Joe Goldberg, cloud practice manager at CCSI, about containers: what they are, how they benefit your organization, and where to begin.
Site Reliability Engineering (SRE) is a practice that combines software development skills and IT operations into a single job function. Automation and continuous integration and delivery are used to reach the goal of improving highly dynamic systems. The concept originated at Google in the early 2000s and was documented in the book of the same name, Site Reliability Engineering (a must-read). SRE shares many governing concepts with DevOps; both domains rely on a culture of sharing, metrics, and automation. SRE can be thought of as an extreme implementation of DevOps. The SRE role is common in cloud-first enterprises and is gaining momentum in traditional IT teams. Part systems administrator, part second-tier support, and part developer, SREs need a personality that is inquisitive by nature: always acquiring new skills, asking questions, and solving problems by embracing new tools and automation.
Microsoft had significant difficulty recovering from its most severe Azure outage in years. On September 4, 2018, a weather-related power spike hit Microsoft’s Azure South Central US region in San Antonio. The surge crippled the facility’s HVAC system, and the subsequent rising temperatures triggered automatic hardware shutdowns. More than 30 cloud services, as well as the Azure status page, were taken out in the process.