Top 3 best practices for AWS
Here at Nymlogic, our business is founded on our passion for working with our customers and helping them maximise the return on their AWS investment. Our aim is to deliver increased business benefit through technical excellence. Nymlogic AWS Practice Principals take the time to consult with each individual customer to understand exactly what blend of business and technical outcomes must be achieved for measurable success.
Often, our customers start by asking us what we consider the top three best practices for AWS. Whilst each AWS deployment is unique in one way or another, here we list our generic top three best practice tips, applicable to most.
Sizing & cost alignment
Right sizing is critical. Designs with significant over-capacity introduce cost with few benefits. Architectures with little thought for futureproofing make for suboptimal investment. Designs that fail to anticipate and calculate growth also introduce cost, by forcing provisioning changes either too frequently or unexpectedly.
The mantra is simple: failure to right-size = unnecessary cost.
Always maintain a constant, audited link between business requirements and usage demands on the one hand, and budget spend on the other. Making users aware that they are front and centre in helping to optimise costs is one tried and trusted way to delegate responsibility and accountability to the operational ‘front line’, where significant budgetary impact can often be made.
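As a simple illustration of the cost of failing to right-size, the sketch below estimates monthly spend on idle capacity. The vCPU counts and hourly rate are hypothetical placeholders, not real AWS prices.

```python
# Minimal right-sizing waste estimate. The hourly rate is a
# hypothetical placeholder, not a published AWS price.
HOURS_PER_MONTH = 730

def monthly_waste(provisioned_vcpus, peak_vcpus_used, rate_per_vcpu_hour):
    """Monthly spend on capacity that never serves even peak demand."""
    idle = max(provisioned_vcpus - peak_vcpus_used, 0)
    return idle * rate_per_vcpu_hour * HOURS_PER_MONTH

# Example: 16 vCPUs provisioned, peak usage of 6, at an assumed $0.05/vCPU-hour
print(monthly_waste(16, 6, 0.05))  # 365.0
```

Even this toy calculation makes the mantra visible: every over-provisioned vCPU has a monthly price attached.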
Tracking and monitoring tools are one way to maintain the discipline of mapping resource instances to user requirements. A variety of automated tools can achieve this, often with the capability to update dynamically and offer users guidance on resource usage and restrictions. Again, this brings visibility to users operating on the ‘front line’, who are often best placed to realise budgetary optimisation gains when resources are sized effectively.
One such tool is Amazon CloudWatch, which monitors metrics and log files and can drive AWS Auto Scaling actions. It operates at both administrative and user level: authorised users or groups gain visibility of resource requirements and of where resource levels can or should be changed. Excess resource provisioning, and its associated cost, can therefore be avoided through tailored optimisation.
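As a sketch of how such monitoring can be wired up, the following builds a CPU alarm definition in the shape of the CloudWatch PutMetricAlarm API; the alarm name, instance id, and 80% threshold are illustrative assumptions.

```python
# Sketch of a CloudWatch alarm definition suitable for scale-out
# decisions. Parameter names follow the CloudWatch PutMetricAlarm API;
# the instance id and threshold are illustrative assumptions.
def cpu_high_alarm(instance_id, threshold=80.0):
    return {
        "AlarmName": f"cpu-high-{instance_id}",
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Statistic": "Average",
        "Period": 300,               # evaluate 5-minute averages
        "EvaluationPeriods": 2,      # require two consecutive breaches
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
    }

alarm = cpu_high_alarm("i-0123456789abcdef0")
# With boto3 this dict would be passed as:
#   cloudwatch.put_metric_alarm(**alarm)
```

Pairing such an alarm with an Auto Scaling policy is what lets provisioning track demand rather than a guess made at design time.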
Storage is another area where the most effective storage class must be weighed against cost. Data storage, application, and performance needs will form part of the design selection and sizing criteria. Flexible, dynamic storage capacity can be achieved through an ‘on demand’ approach, with costs that track fluctuations in demand. Serverless architectures provide additional options for some applications and should be explored. Simply put, you effectively order a meal but only pay by the forkful!
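The ‘pay by the forkful’ point can be made concrete with a small cost comparison between a standard and an infrequent-access storage class. All the rates below are hypothetical placeholders, not published AWS prices.

```python
# Illustrative storage-class comparison: cheaper storage rates often
# come with a retrieval fee, so the access pattern decides the winner.
# All rates are hypothetical placeholders, not published AWS prices.
def monthly_cost(gb_stored, gb_retrieved, storage_rate, retrieval_rate):
    return gb_stored * storage_rate + gb_retrieved * retrieval_rate

# 1 TB stored, light retrieval (500 GB/month): infrequent access wins
standard   = monthly_cost(1000, 500, storage_rate=0.023, retrieval_rate=0.0)
infrequent = monthly_cost(1000, 500, storage_rate=0.0125, retrieval_rate=0.01)
print(standard > infrequent)   # True

# Same data, heavy retrieval (1500 GB/month): standard wins
infrequent_heavy = monthly_cost(1000, 1500, storage_rate=0.0125, retrieval_rate=0.01)
print(standard > infrequent_heavy)  # False
```

The crossover point is exactly the sort of thing the design-time sizing criteria above should capture, rather than leaving it to be discovered on the first invoice.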
This list is far from exhaustive but should serve to highlight the emphasis on designing solutions that can include dynamic and optimised architectural provisioning, in tandem with the capability of dynamically measuring cost and performance.
Security
Security should always play an important part in any AWS integrated solutions approach. Importantly, and as a distinction, wherever possible security is not bolted on as an afterthought, but built in as an integral element of the overall architecture and design from the get-go.
Protect resource access – security 101 is to grant the least privileges necessary to execute assigned tasks or to access data and applications, commensurate with the role or user. Ensure privileges map to security policies. Root is the crown jewel, and appropriate additional access controls must be implemented for it.
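A minimal sketch of least privilege as an IAM policy document, assuming a hypothetical bucket name: read-only access to one bucket and nothing else, shown here as the Python dict you would serialise to JSON.

```python
import json

# Least-privilege sketch: read-only access to a single, hypothetical
# bucket ("example-reports" is a placeholder). No write, no delete,
# no other services.
READ_ONLY_BUCKET_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-reports",      # the bucket itself
                "arn:aws:s3:::example-reports/*",    # objects inside it
            ],
        }
    ],
}

print(json.dumps(READ_ONLY_BUCKET_POLICY, indent=2))
```

The point of the sketch is the shape: an explicit Allow, a short Action list, and Resource ARNs narrowed to exactly what the role needs.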
Multi-Factor Authentication on root – only permit root access and its associated privileges to the most senior administrators, and create the fewest administrative accounts that is practical. Implement multi-factor authentication so that multiple credentials, from differing and disparate sources, must be presented and verified to gain root access.
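One common way to enforce this is a policy that denies actions unless the caller authenticated with MFA, using the aws:MultiFactorAuthPresent condition key. The sketch below shows it as a Python dict ready to serialise to JSON; the NotAction carve-outs are illustrative assumptions so that a user can still enrol an MFA device in the first place.

```python
import json

# Sketch: deny everything unless MFA was presented. The
# aws:MultiFactorAuthPresent condition key is standard IAM; the
# NotAction carve-outs here are illustrative (enough to let a user
# enrol a device), not an exhaustive recommended list.
DENY_WITHOUT_MFA_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAllWithoutMFA",
            "Effect": "Deny",
            "NotAction": ["iam:ListMFADevices", "iam:EnableMFADevice"],
            "Resource": "*",
            "Condition": {
                "BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}
            },
        }
    ],
}

print(json.dumps(DENY_WITHOUT_MFA_POLICY, indent=2))
```

Attached alongside a user's normal allow policies, the explicit Deny wins whenever MFA is absent, which is precisely the behaviour wanted for privileged accounts.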
Secure applications – build a secure architecture by introducing security at multiple access points, from the perimeter (edge) firewall through to dedicated firewalls that defend critical servers, applications, and any DMZs. Network and application firewalls form an integral part of the security arsenal, together with assigned security groups. Open only the minimal firewall ports needed to reach required applications and servers. DDoS is a growing challenge, and appropriate protection at the transport and network layers is critical in most externally facing architectures.
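A minimal-ports security group rule can be expressed in the IpPermissions shape the EC2 AuthorizeSecurityGroupIngress API expects. The open-to-the-world CIDR below is an assumption for a public HTTPS endpoint; narrow it wherever possible.

```python
# Minimal-ports sketch: a single ingress rule permitting HTTPS only,
# in the IpPermissions shape used by EC2 AuthorizeSecurityGroupIngress.
# The 0.0.0.0/0 CIDR is an assumption for a public-facing endpoint;
# internal services should use a much narrower range.
HTTPS_ONLY_INGRESS = [
    {
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "public HTTPS"}],
    }
]

# With boto3 this list would be applied as:
#   ec2.authorize_security_group_ingress(GroupId="sg-...",
#                                        IpPermissions=HTTPS_ONLY_INGRESS)
print(len(HTTPS_ONLY_INGRESS))  # 1
```

Everything not listed is denied by default, which is the security-group behaviour the ‘minimal ports’ advice relies on.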
Identity and Access Management – creating identities for authentication and authorisation is a security imperative. Establish security policies that define which permissions are granted, and to which specific resources. AWS offers the IAM console, which allows identity and access management permissions to be configured through a simple GUI. Understand that configuration errors can result in security breaches and represent a significant risk of exploit. Understand, too, the value of separation of duties: delineated, separate operations teams for security, network, and servers.
Resilience and Redundancy
Business continuity should be one of the key design attributes. Downtime represents a huge risk to most businesses, and that risk is measurable both quantitatively and qualitatively. Consequently, only in exceptional circumstances should an architecture ever be designed without appropriate regard for architectural and operational resilience and redundancy. These measures should be built in as standard for the overwhelming majority of AWS architectural designs, making them the rule rather than the exception.
Multiple data centres, both logically and physically separated, are a well-trusted, tried, and tested fail-over design that provides data, service, and application persistence. AWS Availability Zones give applications both high availability and reliability. Designed in tandem with Amazon VPC, they place resources on the same logical network across multiple data centres.
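To illustrate the multi-AZ layout, the sketch below carves one subnet per Availability Zone out of a single VPC CIDR; the AZ names and the CIDR itself are illustrative.

```python
import ipaddress

# Sketch: split one VPC CIDR into equal subnets, one per Availability
# Zone, so resources sit in physically separate data centres while
# remaining on the same logical network. AZ names and the CIDR are
# illustrative placeholders.
def subnets_per_az(vpc_cidr, azs):
    vpc = ipaddress.ip_network(vpc_cidr)
    blocks = vpc.subnets(prefixlen_diff=2)  # four equal /18 blocks from a /16
    return {az: str(next(blocks)) for az in azs}

layout = subnets_per_az("10.0.0.0/16", ["eu-west-1a", "eu-west-1b", "eu-west-1c"])
print(layout)
# {'eu-west-1a': '10.0.0.0/18', 'eu-west-1b': '10.0.64.0/18',
#  'eu-west-1c': '10.0.128.0/18'}
```

Deploying an instance (or a load-balanced group) into each subnet is what turns the address plan into actual fail-over capacity.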
Hosts on AWS are dynamic and virtual; there are no physical appliances, and instances can sometimes be lost. Should a given instance be terminated for any reason, there needs to be a way to restore the host. This means a device may need to be bootstrapped in a virtualised cloud environment, which must be factored into the design with appropriate fault tolerance and host-resuscitation capability.
Back up regularly, using appropriate backup tools and snapshots. Document, plan, rehearse, and test backup options – before you need them! Understand whether your business has regulatory directives that demand particular backup measures, timelines, and data-retention periods; the design then needs to underpin those policies. Compliance policies are often the principal driver for ensuring adequate backup controls are in place. AWS offers centralised backup management via the console or command line, there are several APIs that may be used, and third-party tools are also available.
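A retention check of the kind such policies require can be sketched in a few lines; the 30-day period below is an assumed policy, not a regulatory recommendation.

```python
from datetime import date, timedelta

# Retention sketch: keep snapshots newer than the retention period and
# flag the rest for deletion. The 30-day default is an assumed policy,
# not a recommendation; regulated workloads may need far longer.
def expired_snapshots(snapshot_dates, today, retention_days=30):
    cutoff = today - timedelta(days=retention_days)
    return [d for d in snapshot_dates if d < cutoff]

snaps = [date(2024, 1, 1), date(2024, 2, 10), date(2024, 2, 25)]
print(expired_snapshots(snaps, today=date(2024, 3, 1)))
# [datetime.date(2024, 1, 1)]
```

Whether the deletion itself runs through a centralised backup service, the CLI, or an API call, encoding the retention period in one reviewable place keeps the implementation honest to the compliance policy.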
The subject of resilience and redundancy is expansive; the key areas highlighted here demonstrate just a few of the many considerations that we at Nymlogic focus on when developing a resilient, redundant, and fault-tolerant AWS architectural design.
For more information, please go to: www.nymlogic.com