Technology: Understanding the digital landscape
January 21, 2020  

Software plays a role in almost every part of every business. Technical teams are responsible for understanding the digital landscape within the organisation and outside of it. This includes selecting the most sensible tools and frameworks to solve problems, and designing software using best practices to enable maintainable solutions that can evolve going forward.

 

To make sound system design decisions, some understanding is needed of the envisioned functional and non-functional requirements. The following questions need to be answered:


•    What is the problem we are trying to solve?  
•    Who/what will use it?  
•    What does the existing technology landscape look like and what integrations are needed?  
•    How must it scale and perform?  
•    Who is going to develop and maintain it?  
•    What technical skills are available?

 

Taking the answers to these questions into account, we will be able to make critical decisions such as:


•    Where will the solution be deployed and hosted?
•    Which technologies will be used to build it?
•    What is the plan for scaling up under load?
•    What security measures are needed?
•    How does the solution fit into the greater technology ecosystem?

 

There are several technology and deployment stacks available today, each with its own advantages and disadvantages. Technologies differ in what they are good at and in how easily they can be used to implement different types of solutions, but often the deciding factor is what skills are available in the team that will develop and maintain the solution, and where that team is going strategically. As new technologies come out frequently, it pays to be a little conservative: make sure any proposed technology has a proven track record with regards to scalability and maintainability.

 

1. Architecture
When designing a solution there are many technical architecture concerns, known as non-functional requirements, that need to be considered. Even if most of these requirements are not implemented at the start, it is essential to keep them in mind from the beginning and to have a clear understanding of how provision will be made when the time comes to cater for them. These requirements typically take much more effort to implement later in the life of a solution if no consideration was given to them early on. Some examples follow.

 

•    Scalability
This is how system resources are added to handle a growing amount of work, e.g. adding memory to the database host or adding another node to a cluster. Some applications (especially legacy applications) can only be scaled vertically. This means more resources like memory or CPU power must be added to a single server in order to get better performance. However, this has physical limits, and costs increase faster than the improvement in performance. It is preferable for an application to allow horizontal scaling, where more moderately powered nodes are added to a cluster for better performance. This is typically more cost-effective and has many secondary benefits, such as better availability and reliability.

 

•    Throughput
The rate at which the system must be able to handle requests, e.g. the number of transactions per second the system must cater for.

 

•    Performance
The amount of useful work accomplished by the system with a given amount of resources, e.g. having a low response time or low resource utilisation.

 

•    Volumes
How much data the system must be able to handle, e.g. storing two million records per year or handling 1 MB file imports.

 

•    Security
The resilience the system has against potential harm, unwanted changes and data leaks. No system is 100% secure, so the key business risks need to be understood for appropriate technical solutions to be implemented. For instance, proper web security and encryption are sufficient for most public-facing web applications. However, when money is involved, e.g. with a bank website, extra levels of authentication, encryption, monitoring and auditing are required. Threat modelling is a technique for identifying, preventing and mitigating threats in an optimised way, by determining where the most effort should be applied as the system and external factors change.

 

•    Reliability
The probability that the system behaves as designed. No system is 100% reliable, so it needs to be understood what the risk to the business is if a single transaction fails due to system failure. The envisaged design needs to take this into account and find solutions that achieve the desired reliability, e.g. the system can automatically retry certain steps that are susceptible to network timeouts.
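As a minimal sketch (assuming the flaky step raises a TimeoutError and is idempotent, so it is safe to retry), a retry wrapper with exponential backoff could look like this in Python:

    import random
    import time

    def call_with_retries(operation, max_attempts=5, base_delay=0.5):
        """Retry a flaky operation with exponential backoff and jitter."""
        for attempt in range(1, max_attempts + 1):
            try:
                return operation()
            except TimeoutError:
                if attempt == max_attempts:
                    raise  # give up after the final attempt
                # Exponential backoff with jitter spreads retries out in time.
                delay = base_delay * (2 ** (attempt - 1)) * random.uniform(0.5, 1.5)
                time.sleep(delay)

Bounding the number of attempts matters: unbounded retries can amplify an outage rather than smooth over it.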

 

•    Availability
The proportion of time a system is in a functioning condition. No system is 100% available, so the business risks related to any downtime need to be understood and catered for. If the business loses money for every second the system is down, a high availability such as 99.99% up-time may be the service level agreement, thus requiring many levels of redundancy in the system.
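The difference between availability levels becomes concrete when translated into a downtime budget; a small Python calculation:

    # Downtime budget implied by an availability SLA (illustrative arithmetic).
    MINUTES_PER_YEAR = 365.25 * 24 * 60

    for sla in (0.99, 0.999, 0.9999):
        downtime = (1 - sla) * MINUTES_PER_YEAR
        print(f"{sla:.2%} uptime allows ~{downtime:.0f} minutes of downtime per year")

At 99.99% the budget is roughly 53 minutes per year, which is why each extra "nine" demands disproportionately more redundancy.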

 

•    Maintainability
The ease with which defects can be fixed or new features added to the system. Short-lived research applications might not have the same maintainability requirements as a core business application that must last more than 10 years. This is a balancing act between spending extra effort to make the system more maintainable and using that time to add new features faster. Developers with different personality types tend to have wildly different natural biases in this regard, so the vision of the business must be clear so that good decisions can be made by the team.

 

•    Internationalisation
Should the system be adaptable to various languages and regions without engineering changes? If a front-end does not cater for this from the start, it can take a lot of effort to add later in the lifecycle.
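The usual approach is to look strings up in per-language catalogues rather than hard-coding them. A minimal sketch using Python's standard gettext module (the "app" domain and locale directory layout are illustrative):

    # Assumes compiled catalogues exist under ./locale/<lang>/LC_MESSAGES/app.mo;
    # with fallback=True the original string is used when none is found.
    import gettext

    def translator(language):
        t = gettext.translation("app", localedir="locale",
                                languages=[language], fallback=True)
        return t.gettext

    _ = translator("fr")
    print(_("Welcome"))  # the French string if a catalogue exists, else "Welcome"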

 

2. Front-end
Most modern applications are mainly accessed via a web front-end or a mobile application. Unless there are specific business requirements, it usually makes sense to start with a responsive web application that works on different devices; the user base should determine how much effort to put into making it scale well to mobile devices. This is highly dependent on the context of the problem and the users involved.

 

A public-facing application will almost always need to function well on both mobile and desktop environments, whereas a business-facing application might only ever be accessed on a specific platform.

 

o    Web
Depending on the business requirements, there are several options for making the system available online. Informative websites typically contain only static web content, which is fairly simple to host on premises (for low to medium volumes) or in the cloud. For high-volume static content, a content delivery network (CDN) can be used, which spreads the load across the globe.

 

There are different ways of implementing web applications that support user interaction. Initially web applications were built using frameworks that rendered the final HTML on the server, but this approach does not scale as well. Now we mostly host a single-page application (SPA), a static page plus a JavaScript bundle, on the server and let the final HTML rendering happen in the client browser. A bootstrap page is loaded first and any additional resources are retrieved as needed. Data access is achieved using data exchange paradigms like REST, which transfers only data and is more efficient than transferring fully rendered web pages.
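With REST the client fetches raw data, typically JSON, and renders it locally. A minimal sketch using Python's requests library against a hypothetical endpoint (the URL and the "status" field are made up for illustration):

    import requests

    # The client receives raw JSON, not rendered HTML, and builds the UI from it.
    response = requests.get("https://api.example.com/v1/orders/42", timeout=5)
    response.raise_for_status()   # fail loudly on HTTP errors
    order = response.json()
    print(order["status"])        # assumes the payload exposes a "status" field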

 

In order to provide a native-feeling application via the web browser, one can implement a progressive web application (PWA). PWAs are built using common web technologies including HTML, CSS and JavaScript, so they are intended to work on any platform that has a modern standards-compliant browser. A PWA can provide functionality that a normal web application typically doesn't but that native mobile or desktop apps do, e.g. working offline, push notifications and device hardware access, enabling user experiences similar to native applications on desktop and mobile devices.

 

o    Mobile

In order to provide a mobile presence, one can implement multiple native applications using each platform's native tools. This typically gives the best user experience for rich applications that need a lot of device interaction and offline capabilities. However, since a completely different application needs to be implemented for each mobile platform (Android, iOS, etc.), it can be costly, and finding specialised developers could be a challenge.

 

There are alternative options, like using a cross-platform framework where one application is developed that works on multiple mobile platforms with a lot less effort than supporting multiple native apps. These frameworks are not perfect, as there are limitations and a potentially slightly less-than-native feel to the application, but depending on the business requirements this could be an option.

As mentioned in the web section, PWAs are also an option, requiring even less specialist skill and effort, but coming with more limitations.

 

3. Back-end
As applications have grown larger and more complex over time, with higher demands on scalability and availability, application architectures have adapted accordingly to support them. Instead of building a monolithic application that is hard to scale out and maintain, applications are typically broken up into several smaller applications or microservices, each with its own focus, maintenance lifecycle and non-functional requirements (scalability, availability, etc.).

 

Splitting up applications into small pieces helps teams to understand what each service does without keeping the complexity of the entire application in mind. This requires deployments to be automated, services to be automatically discoverable and connections to be routed accordingly. Asynchronous communication helps the system cope better with spikes and improves overall stability.
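A toy sketch of that asynchronous decoupling, using only Python's standard library (in production the queue would be a message broker such as RabbitMQ or Kafka):

    import asyncio

    async def producer(queue):
        for i in range(10):           # a burst of incoming requests
            await queue.put(f"event-{i}")
        await queue.put(None)         # sentinel: no more work

    async def consumer(queue):
        while (item := await queue.get()) is not None:
            await asyncio.sleep(0.1)  # simulate slow downstream processing
            print("processed", item)

    async def main():
        queue = asyncio.Queue(maxsize=100)  # bounded buffer absorbs spikes
        await asyncio.gather(producer(queue), consumer(queue))

    asyncio.run(main())

The producer can burst ahead of the consumer because the queue buffers the spike; the consumer drains it at its own pace.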

 

As back-end systems are broken up into smaller pieces that can be maintained by separate teams, it can make sense to also split up the front-ends. This ensures that each team is only responsible for maintaining its own piece and allows different maintenance and framework upgrade cycles, which can be difficult in a big front-end application. There is typically an overarching application that stitches the individual pieces together.

 

Historically, all applications were hosted in private data centres that had to be maintained by teams of specialised system administrators. Cloud computing seeks to commoditise computing infrastructure by implementing large-scale data centres that can be optimised and provide highly reliable and scalable infrastructure with less set-up and maintenance effort. Most hosting is quickly moving to the cloud, as it is more cost-effective and reliable in most cases. However, many businesses still use physical and virtual servers. Private cloud hosting can be an option, but one would still need a team that can support it.

 

Being cost-effective in the cloud requires the application to be designed to work efficiently there. The traditional mindset of having a virtual server that you maintain manually will be more costly in the cloud. With the 'serverless' model, the application pieces are hosted on shared infrastructure that is highly tuned for each type of workload. That way the business pays only for each use of the system instead of renting fixed hardware that sits idle most of the time.
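As an illustration, on a serverless platform the unit of deployment is a plain function that the platform invokes per request. The sketch below follows the handler convention of AWS Lambda's Python runtime; the payload field is hypothetical:

    import json

    def handler(event, context):
        # 'event' carries the request payload; 'order_id' is a made-up field.
        order_id = event.get("order_id", "unknown")
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"processed order {order_id}"}),
        }

There is no server for the team to patch or scale manually; the platform provisions compute and bills per invocation.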

 

4. Persistence
This is an area where incorrect decisions can have a long-term impact, beyond the life of the solution itself. Data stores tend to outlive the applications they were built for, and their value to the business extends beyond what those applications do.

 

There are some important questions to ask when choosing a persistence layer:


•    What is being stored, and for what purpose?  
•    Is the data heavily numeric with a requirement to perform complicated calculations on that data?  
•    Is it high-volume event data from IoT devices where the individual events are less important than trends and anomalies?  
•    Is it unstructured, where there’s no defined or expected schema, or where the schema is expected to change frequently?  

 

•    Types of persistence

There are several types of databases available today. The relational database, historically the most popular database type, is joined by document databases, graph databases, key-value stores and more. The choice of database type should be guided by the type of data being stored, not by what is popular in the media. A poor decision here can have major repercussions throughout the application's development and usage.

 

•    Picking a solution
The type of data being stored is not the only consideration. Consistency and data security are two other important aspects that must be taken into account before deciding on a database platform.

 

Consistency is about the behaviour of the system under concurrent reads and writes. If three processes write to the database and then a fourth one reads, is it important that the fourth process sees all three of the writes? Is it going to break anything in the application if it only sees the first two? Or if it only sees the third write? Typically, relational database engines support only strong consistency, meaning that in the example the fourth process would always see all three writes, while non-relational databases often support eventual consistency, meaning the fourth process could see any combination of the writes, including none of them. There are exceptions, however, with some modern database systems offering selectable consistency models.
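A toy simulation of the difference, assuming a primary that copies each write to a read replica after a short replication delay (all timings are illustrative):

    import threading
    import time

    primary, replica = [], []

    def write(value, replication_delay=0.2):
        primary.append(value)
        # Replication happens in the background after a delay.
        threading.Timer(replication_delay, replica.append, args=[value]).start()

    for v in ("w1", "w2", "w3"):
        write(v)

    print("read from primary:", primary)  # strong: sees all three writes
    print("read from replica:", replica)  # eventual: may see none of them yet
    time.sleep(0.3)
    print("replica later:    ", replica)  # converges once replication completes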

 

Data security has always been important, but the increasing frequency of data breaches and the existence of laws such as GDPR have brought it into focus. This isn’t just a matter of whether the database engine has any known security vulnerabilities. It’s also about auditing, encryption, granularity of permissions and a lot more.

 

Is there sensitive data in the database that needs to be stored encrypted? If so, and the database chosen does not support such encryption, then the application development gains extra complexity as the encryption would have to be implemented within the application. Depending on the requirements, it may also be worth investigating how and where keys can be stored, what kind of work would be required to rotate keys, and whether the encryption protects the data against the administrators of the servers or not.
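Where application-level encryption is unavoidable, a minimal sketch using the widely used Python cryptography package's Fernet recipe (key storage and rotation are deliberately out of scope here; in practice the key would live in a secrets manager):

    from cryptography.fernet import Fernet  # pip install cryptography

    key = Fernet.generate_key()   # in production, fetch from a secrets manager
    cipher = Fernet(key)

    token = cipher.encrypt(b"account number 12345")  # store this ciphertext
    plaintext = cipher.decrypt(token)                # requires the same key
    assert plaintext == b"account number 12345"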

 

Does access to certain types of data need to be tracked and audited? Do changes need to be audited? Do schema changes, in the case of relational databases, need to be audited? These are often not obvious requirements when the initial application requirements are detailed, but they are critically important nonetheless.  

 

Once all the technical requirements are out of the way, there's also the question of existing skills and investments. This isn't just a matter of what the development team is familiar with, but also what the operations teams are familiar with. Training is costly and time-consuming, so introducing a completely new database engine may not be desirable. This goes for infrastructure and cloud investments as well: the existing infrastructure and where the organisation is investing in a cloud footprint (or whether it is staying completely on-premises) can also affect the choice of database engine used for the application.

 

5. Business intelligence
Today businesses collect large amounts of data and information that can be used to derive real-world value. This is what business intelligence (BI) excels at, as it comprises the strategies and technologies used by enterprises for the analysis of business information. This information is often used to provide historical, current and predictive views of business operations; it facilitates efficient and effective decision-making and enables automation of current business processes. Time to value is also attractive: BI projects have demonstrated that even in organisations still improving their BI maturity, insights can be obtained within a few weeks. However, to ensure the initiatives are sustainable, the organisation will eventually have to mature and get the necessary infrastructure in place.

 

Most of the market-leading BI tools these days are so evenly matched that the biggest factors in deciding which tool to use come down to how much data transformation needs to be handled by the tool itself and the total cost of ownership. When a business decides to embark on the BI journey, it's important to take an approach that will future-proof the solution. This means spending most of the time curating the data effectively. Start by establishing a data warehouse solution in which business principles and logic are built into a foundation layer that is centrally located and controlled. This enables any BI tool to be implemented on top of the established data warehouse, and provides the versatility and flexibility to substitute the BI tool being used, should the need arise. This ensures the sustainability of the entire BI solution.

 

Once a healthy foundation is set, the BI tools themselves need to be compared to determine which tool would be the best business fit. Some of the aspects that need to be considered are:

 

•    Development time
The amount of time required to get the BI project up and running. Development time directly impacts the overall development cost of the project.

 

•    User experience
Choosing a BI tool that is easy to use, simple to understand and aesthetically pleasing will aid the tool's adoption rate and the overall success of the solution.

 

•    Security
The capability of restricting data so it can only be viewed by certain individuals, and of ensuring that all the data in a BI model is safe, has become standard in BI reporting tools. There are, however, various methods that can be used to provide this function.

 

•    Self-service
Generally, in any business there will be a few "power users". The ease with which these users can create a particular view or dashboard, or develop their own model to save on development costs, is referred to as the self-service aspect of a tool.

 

•    License cost
The costs associated with implementing and maintaining the tool generally involve a licensing cost. This could be a single enterprise-wide licence, a licence per user, or both. This is one of the most important aspects of choosing the correct BI tool for an organisation.

 

•    Functionality
Although a lot of the market-leading BI tools share similar features and have more or less the same capabilities, careful consideration should be given to the overall goal of the organisation and the capabilities that will be required for the particular task at hand.

 

•    Customisability
BI tools offer a wide range of customisability in terms of the look and feel. This usually involves adding corporate identity to any model that will be developed.

 

•    Embedding
Embedding of models has become a very popular addition to many BI tools. This allows models to be incorporated into a web interface, giving a seamless transition from, for example, an intranet site to the BI tool. This greatly improves the adoption rate within an organisation, as there is no need to alternate between two interfaces to access the tool.

 

•    Push reports
Many years ago it was common to pull your reports from the BI tools; however, there has been a shift towards incorporating push-reporting capabilities. These allow reports in various formats to be sent to users via email, removing the need to go to the tool itself and pull data.

 

•    Integrations
BI tools allow integration with other components such as Python, R and other statistical tools. Additionally, the capability of connecting to various data sources forms part of the integration capabilities and is an important consideration, as not all tools can connect to the same number of data sources, and some tools require additional expenditure to connect to particular source systems.
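Such integration usually means pulling warehouse data into a statistical environment for further analysis. A minimal, self-contained sketch using pandas with an in-memory SQLite database standing in for the warehouse (the "sales" table is made up for illustration):

    import sqlite3
    import pandas as pd

    conn = sqlite3.connect(":memory:")
    # Hypothetical sales data standing in for a real warehouse table.
    conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?)",
                     [("north", 120.0), ("south", 80.0), ("north", 50.0)])

    df = pd.read_sql_query(
        "SELECT region, SUM(amount) AS total FROM sales GROUP BY region", conn)
    print(df)  # aggregated result, ready for statistical work in Python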

 

6. Takeaways
A team with experience across a variety of technologies can give expert advice on lessons learned, new trends and common pitfalls. Make sure the vision for the solution is clear and the functional and non-functional requirements are understood in advance to avoid future re-work, but don't try to do everything at once: figure out what the MVP is so that a working application can be delivered quickly. Feedback and changing business requirements can then be incorporated to meet business demands as needed.


