Implementation of Project Logic - Technology

Implementation of the Design Logic


Implementation Approach Philosophies

Waterfall Approach

The waterfall approach to implementing an application requires a designer to consult with one or more representatives of the end-user organization and write down all of the application's specifications. Typically, specifications come in a set of functional documents or use cases, written in such a way that the end user can easily read and understand the documents.

The end user signs these documents, and the documents are then collected by the technical design team who design the application, creating various artifacts such as class model diagrams, state diagrams, activity diagrams, and data models. The goal of this phase is to write everything in such detail that a developer will have no trouble creating the necessary code. There is a formal handover of the design to the development team and to the test team. After delivery, the development team starts coding and the test team uses the technical design in combination with the use cases to create test cases and test scenarios.

After the development team finishes coding, the code is handed over to the test team. The test team performs the tests it has designed based on the requirements and the detailed design. Any issues are fixed by the development team. Once the testing and fixing process is complete, the application is delivered to the end user for acceptance testing. The end user performs a final check to see whether the application complies with the initial requirements; if it does, he approves the finished product and the project is completed.

A project can have more or fewer phases when using the waterfall approach, but its key feature is a very formal start and end of each phase, with very formal deliverables.

The advantage of the waterfall approach is that each team's responsibility is clearly defined: it is clear what the team needs to deliver, when it needs to deliver it, and to whom. Often, the development team will not need to interact with the user at all. This can be very useful when outsourcing development to a different country.

The main disadvantage of the waterfall approach is that, in an environment where everything is organized very formally, the flexibility to respond to change decreases. Even change itself needs to be organized. Very few companies seem to do this effectively, and the result is often a significant increase in overhead costs. To manage project costs, some companies even postpone any changes to the requirements until after the initial delivery of the application, effectively delivering an application that does not meet the end user's needs.

Agile Development

Many long-running software development projects went over budget and did not deliver the product on time. The premise of the agile software development philosophy is to minimize risk by developing software in short time boxes, called iterations, which typically last from one to four weeks. Each iteration is like its own miniature software project and includes all the tasks necessary to release the increment of new functionality: planning, requirements analysis, design, coding, testing, and documentation. While an iteration may not add enough functionality to warrant product release, an agile software project aims to be able to release new software at the end of each iteration. At the end of each iteration, the team reevaluates the project's priorities.

The goals of agile software development are to achieve customer satisfaction through rapid and continuous delivery of useful software; to always build what the customer needs; to welcome, rather than oppose, late changes to requirements; to adapt regularly to changing circumstances; and to have close, daily cooperation between business people and developers, with face-to-face conversation as the best form of communication.

The main advantage of agile software development is the flexibility in dealing with changes, always aiming to deliver according to business needs. The downside, of course, is an increase in the complexity of managing scope, planning, and budgeting. Another common risk is limited attention to (technical) documentation.

Incremental Development

Incremental software development is a mix of agile and waterfall development. An application is designed, implemented, and tested incrementally, so that each increment can be delivered to the end user. The project is not complete until the last increment is completed. This approach shortens the waterfall by defining intermediate increments and using some of the advantages of agile development. Based on feedback received on a previous increment, adjustments can be made in the next increment. The next increment can consist of new code as well as modifications to previously delivered code.

The advantage is that formalities remain in place, but change management becomes easier. The cost of testing and deploying an application multiple times, however, will be greater than doing it just once.

Program Flow Control

Choosing an approach to program flow control is a very architectural task. The goal is to create a blueprint of your application where, once you start adding functionality and code, everything seems to have its own place. If you've ever reviewed or written high-quality code, you understand this principle.

Organizing Code

The first step in designing program flow is to organize the code by establishing a set of rules to help create a blueprint, or outline, of the application. Maintenance, debugging, and bug fixing will be easier because the code is located in a logical location. Once you've done the groundwork, you can choose an approach to implementing your application's logic.

Design patterns should play an important role in the design of program flow control. Over the years, a lot of code has been written and many solutions have been designed for recurring problems. These solutions are laid out in design patterns. Applying a design pattern to a common software design problem will help you create solutions that are easily recognizable and can be implemented by your peers. Unique problems will still require unique solutions, but you can use design patterns to guide you in solving them.

Creating the Project

Layers

The first step is to consider logical layers. Note that layers are not the same as tiers; the two are often confused or even considered the same.

Layers versus Tiers

Layers are all about creating boundaries in your code. The top layer can have references to code in the layers below it, but a layer can never have a reference to code in a layer above it. Tiers refer to the physical distribution of layers across multiple computers. For example, in a three-tier application, the UI is designed to run on a desktop computer, the application logic is designed to run on an application server, and the database runs on a dedicated database server; the code in each tier can consist of several layers.

Figure 8-1: Basic three-tier organization

Layers refer to levels of abstraction. The layers shown in Figure 8-1 hold true for most applications. These levels are also referred to as the three main layers and may go by various other names. As a rule, code in the presentation layer can call services in the application logic layer, but the application logic layer must not call methods in the presentation layer. The presentation layer should never directly call the data access layer, as this would bypass the responsibilities implemented by the application logic layer. The data access layer should never call the application logic layer.
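
As a quick illustration of these calling rules, here is a minimal sketch; the class and method names are hypothetical, not from the text, and it is shown in Python for brevity:

```python
# Illustrative sketch of the three main layers and their allowed call
# direction: presentation -> application logic -> data access.

class DataAccessLayer:
    """Knows only how to talk to the data store; never calls upward."""
    def fetch_order(self, order_id):
        return {"id": order_id, "status": "open"}  # stand-in for a database query

class ApplicationLogicLayer:
    """Implements business rules; may call down into data access only."""
    def __init__(self, data_access):
        self._data = data_access
    def get_order_status(self, order_id):
        return self._data.fetch_order(order_id)["status"]

class PresentationLayer:
    """Talks to the user; calls the application logic, never the data access layer."""
    def __init__(self, logic):
        self._logic = logic
    def show_order(self, order_id):
        return f"Order {order_id} is {self._logic.get_order_status(order_id)}"

ui = PresentationLayer(ApplicationLogicLayer(DataAccessLayer()))
print(ui.show_order(42))  # -> Order 42 is open
```

Note that the dependency arrows all point downward: each constructor receives only the layer directly beneath it.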

Layers are just an abstraction, and probably the easiest way to implement layers is to create folders in your project and add code to the appropriate folder. A more useful approach is to place each layer in a separate project, thus creating separate assemblies. The benefit of putting your application logic in a library assembly is that it allows you to create unit tests, using Microsoft Visual Studio or NUnit, to test the logic. It also creates flexibility in choosing where to deploy each layer.

Physical Tiers

In an enterprise application, you would expect to have multiple clients for the same logic. In fact, part of what makes an application an enterprise application is that it is deployed in three tiers: client, application server, and database server. The Microsoft Office Access application created by your company's sales department, while very important to the sales department, is not an enterprise application.

Note that the application logic and data access layers are often deployed together on the application server. Part of designing the project is choosing whether to access the application server using .NET Remoting or Web services. Whichever you choose, you'll add some code to the presentation layer to easily access the remote services. If you are using Web services to access the services on your application server, Visual Studio .NET will do the work for you and generate the proxy code, automatically providing an implementation of the remote proxy pattern.

Adding Patterns to Layers

The three basic layers provide a high-level overview. Let's add some structural patterns to create a robust enterprise architecture. The result is shown in Figure 8-2.

Focus on the application logic layer. Figure 8-2 shows that access to the application logic goes through a facade. A facade is an object that provides a simplified interface to a larger body of code, such as a class library. A facade can reduce external code's dependencies on the inner workings of a library, because most code uses the facade, thus allowing more flexibility as the system evolves. To do this, the facade provides a coarse-grained interface to a collection of fine-grained objects.
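
A minimal sketch of such a facade, assuming a bookstore domain with hypothetical fine-grained collaborators (shown in Python for brevity):

```python
# Hypothetical fine-grained objects the facade coordinates.
class InventoryChecker:
    def in_stock(self, isbn):
        return True  # stand-in for a real stock lookup

class PaymentProcessor:
    def charge(self, customer_id, amount):
        return "payment-ok"  # stand-in for a real payment call

class OrderRepository:
    def save(self, customer_id, isbn):
        return 1001  # stand-in for persisting and returning a new order id

class BookStoreFacade:
    """Coarse-grained entry point: one call hides three fine-grained collaborators."""
    def __init__(self):
        self._inventory = InventoryChecker()
        self._payments = PaymentProcessor()
        self._orders = OrderRepository()

    def place_order(self, customer_id, isbn, amount):
        if not self._inventory.in_stock(isbn):
            raise ValueError("out of stock")
        self._payments.charge(customer_id, amount)
        return self._orders.save(customer_id, isbn)
```

Callers see a single `place_order` operation; the inventory, payment, and persistence objects can change freely behind it.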

Decision Flow

Program flow control, also known as decision flow, concerns how you design the services in your application logic layer or, as you saw in the previous paragraph, how you design the methods in your facade.

There are two approaches to organizing your services:

  • action oriented
  • state driven

Action-Oriented Approach

By organizing services based on user actions, you are implementing application logic by offering services, each of which handles a specific request from the presentation layer. This is also known as the transaction script pattern. This approach is popular because it is simple and looks very natural. Examples of methods that follow this approach are BookStoreService.AddNewOrder(Order order) and BookStoreService.CancelOrder(int orderId).

The logic needed to perform the action is implemented very sequentially within the method, making the code very readable but also harder to reuse. Using additional design patterns, such as the table module pattern, can help increase reusability.
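
A sketch of this transaction-script style, using the method names from the text over an otherwise hypothetical in-memory store (Python for brevity):

```python
# Transaction-script style: one service method per user action, with the
# logic written sequentially inside the method.

class BookStoreService:
    def __init__(self):
        self._orders = {}   # stand-in for the database
        self._next_id = 1

    def add_new_order(self, order):
        # Validate, then persist -- all steps inline and easy to read.
        if not order.get("lines"):
            raise ValueError("an order needs at least one order line")
        order_id = self._next_id
        self._next_id += 1
        self._orders[order_id] = dict(order, status="placed")
        return order_id

    def cancel_order(self, order_id):
        self._orders[order_id]["status"] = "cancelled"
```

Each method is self-contained and readable, but the validation and persistence steps are harder to reuse across actions, as the text notes.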

State-Driven Approach

It is also possible to implement the application's decision flow in a much more state-oriented way. Services offered by the application server are more generic in nature, for example BookStoreService.SaveOrder(Order order). This method will examine the status of the order and decide whether to add a new order or cancel an existing order.
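
A sketch of the state-driven alternative, with hypothetical statuses and an in-memory store (Python for brevity):

```python
# State-driven style: one generic SaveOrder-like entry point that inspects
# the order's status and decides what to do. Statuses are hypothetical.

class BookStoreService:
    def __init__(self):
        self._store = {}

    def save_order(self, order):
        if order.get("status") == "cancelled" and order["id"] in self._store:
            del self._store[order["id"]]      # cancel an existing order
            return "cancelled"
        if order["id"] not in self._store:
            self._store[order["id"]] = order  # add a new order
            return "added"
        self._store[order["id"]] = order      # update an existing order
        return "updated"
```

The service surface is smaller, but each call must branch on state, so the decision flow lives inside the method rather than in the method names.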

Designing Data Structures

You must make several choices when designing your data structures. The first choice is the data storage mechanism, the second is the intended use of the data, and the third is version requirements. There are three ways to look at data structure designs:

  • The services offer data; data is a reflection of the relational database.
  • Data must be mapped to objects and services provide access to objects.
  • The data offered by services must be schema-based.

Choosing one of the three as the basis for your data flow structure should be done at an early stage of the design process. Many companies have guidelines that mandate one of the three options for all projects, but where possible you should re-evaluate the options for each project, choosing the optimal approach for the project at hand.

Choosing a Data Storage Engine

When designing your application, you will undoubtedly have to design some kind of data store. The following stores and forms of data storage are available:

  • Registry
  • app.config file
  • XML files
  • Plain text files
  • Database
  • Message queuing

Each store has its own unique characteristics and can be tailored to specific requirements.

Designing the Data Flow

Data Flow Using ADO.NET

By implementing data-centric services in the application logic layer, you will design your data flow using ADO.NET. The .NET Framework class library provides an extensive application programming interface (API) for manipulating data in managed code. Referred to as ADO.NET, the API can be found in the System.Data namespace. Complete separation of data carriers and data stores is an important design feature of ADO.NET. Classes such as DataSet, DataTable, and DataRow are designed to store data but retain no knowledge of where the data came from; they are data source agnostic. A separate set of classes, such as SqlConnection, SqlDataAdapter, and SqlCommand, takes care of connecting to a data source, retrieving data, and populating the DataSet, DataTable, and DataRow. These classes are located in sub-namespaces such as System.Data.SqlClient, System.Data.OleDb, System.Data.OracleClient, and so on. Depending on which data source you want to connect to, you use the classes in the corresponding namespace, and depending on the capabilities of the product you're using, you'll find that these classes offer more or less functionality.

Since the DataSet is not connected to the data source, it can be used quite successfully to manage the flow of data in an application. Figure 8-5 shows the data flow when doing this.

Let's take a look at this design and imagine that someone has logged on to your bookstore and ordered three books. The presentation layer manages the state of the shopping cart. The customer is ready to place the order and has provided all the necessary data. He chooses to submit the order. The web page transforms all the data into a DataSet containing two DataTables, one for orders and one for order lines; inserts a DataRow for the order; and inserts three DataRows for the order lines. The web page then displays this data back to the user once more, binding data controls against the DataSet, and asks "Are you sure?" The user confirms the request, and it is submitted to the application logic layer. The application logic layer checks the DataSet to see if all required fields have a value and performs a check to see whether the user has more than $1,000.00 in outstanding bills. If all goes well, the DataSet is passed to the data access layer, which connects to the database and generates insert statements from the information in the DataSet.
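
The walkthrough above can be sketched end to end. This is a minimal sketch using plain dicts as stand-ins for the DataSet and DataTables; the table names and the $1,000 threshold come from the text, everything else is hypothetical, and it is shown in Python for brevity:

```python
def build_order_dataset(customer_id, lines):
    """Presentation layer: pack the order and its lines into one data carrier."""
    return {
        "Order": [{"customer_id": customer_id}],
        "OrderLines": [{"isbn": isbn, "qty": qty} for isbn, qty in lines],
    }

def validate(dataset, outstanding_bills):
    """Application logic layer: required rows present, bills under $1,000."""
    if not dataset["Order"] or not dataset["OrderLines"]:
        raise ValueError("order and order lines are required")
    if outstanding_bills > 1000:
        raise ValueError("customer has more than $1,000.00 in outstanding bills")
    return dataset

def to_insert_statements(dataset):
    """Data access layer: generate insert statements from the data carrier."""
    stmts = [f"INSERT INTO Orders (customer_id) VALUES ({r['customer_id']})"
             for r in dataset["Order"]]
    stmts += [f"INSERT INTO OrderLines (isbn, qty) VALUES ('{r['isbn']}', {r['qty']})"
              for r in dataset["OrderLines"]]
    return stmts

ds = validate(build_order_dataset(7, [("isbn-1", 1), ("isbn-2", 1), ("isbn-3", 1)]), 250)
sql = to_insert_statements(ds)
```

The same carrier travels through all three layers unchanged, which is exactly what makes the DataSet approach quick to build and easy to data-bind.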

Using the DataSet in this way is a quick and efficient way to build an application, exploiting the power of the Framework Class Library and ASP.NET's ability to bind various controls, such as the GridView, against a DataSet. Instead of plain DataSet objects, you can use typed DataSet objects to improve the coding experience in the presentation layer as well as the application logic layer. The advantage of this approach is also its disadvantage. Small changes to the data model do not necessarily force many methods to change their signatures, so in terms of maintenance this works really well. But remember that the presentation layer is not necessarily a user interface; it can also be a web service. If you modify the definition of the DataSet, perhaps because you are renaming a field in the database, then you are modifying the contract that the web service subscribes to. As you can imagine, this can lead to significant issues. This scenario works well if the presentation layer is just a user interface, but for interfaces to external systems or components you'll want to hide the inner workings of your application, transform the data into something other than a direct clone of your data model, and create Data Transfer Objects (DTOs).

Data Flow Using Object Relational Mapping

Data flow using ADO.NET is a very data-centric approach: data and logic are kept separate. At the other end of the spectrum is a more object-oriented approach, in which classes are created to group data and behavior. The goal is to define classes that mimic the data and behavior found in the business domain for which the application was created. The result is often referred to as a business object. The collection of business objects that make up the application is called the domain model. Some developers claim that a rich domain model is better suited to designing more complex logic. It is difficult to prove or disprove such a statement; just know that you have a choice and that it's up to you to make it.

Figure 8-6 shows a data flow similar to that in Figure 8-5, except that now you've added the object relational mapping layer and replaced the DataSet objects with different data carriers.

Now walk through the same scenario as before: someone has logged on to your bookstore and ordered three books. The presentation layer manages the state of the shopping cart. The customer is ready to place the order and has provided all the necessary data. He chooses to submit the order. The web page turns all the data into a DTO holding the data for one order with three order lines, creating the objects as needed. The web page displays this data back to the user once more, data-binding controls against the DTO using the ObjectDataSource in ASP.NET 2.0, and asks "Are you sure?" The user confirms the choice, and the DTO is submitted to the application logic layer. The application logic layer transforms the DTO into a business object of type Order with a property holding three OrderLine objects. The Order.Validate() method is called to validate the order and verify that all required fields have a value, and a check is made to identify whether the user has more than $1,000.00 in outstanding bills. To do this, the order calls Order.Customer.GetOutstandingBills(). If all is well, the Order.Save() method is called. The order passes through the object relational mapping layer, where the order and its order lines are mapped to DataTables in a DataSet, and the DataSet is passed to the data access layer, which connects to the database and generates insert statements from the information in the DataSet. There are, of course, many ways in which object relational mapping can occur, and not all of them include a transformation to a DataSet; some create the insert statements directly but still use the data access layer to execute them.

As you can see, some transformations take place. The use of DTOs is necessary because a business object implements the behavior and the behavior is subject to change. To minimize the impact of these changes on the presentation layer, you need to transform the data out of the business object and into a data transfer object. In Java, the data transfer object is often referred to as the value object.
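
A minimal sketch of this business-object-to-DTO transformation; the field names are illustrative, and it is shown in Python for brevity:

```python
# A business object with behavior, and a flat DTO handed to the presentation
# layer so that changes in behavior don't ripple outward.

class Order:
    def __init__(self, order_id, lines):
        self.order_id = order_id
        self.lines = lines           # list of (title, price) tuples

    def total(self):                 # behavior lives on the business object
        return sum(price for _, price in self.lines)

def to_dto(order):
    """Transform the business object into a plain data carrier (DTO)."""
    return {
        "order_id": order.order_id,
        "lines": [{"title": t, "price": p} for t, p in order.lines],
        "total": order.total(),      # computed once; the DTO carries data only
    }

dto = to_dto(Order(1, [("Book A", 40.0), ("Book B", 50.0)]))
```

The DTO exposes only data; if `Order.total()` later changes how it is computed, the presentation layer's contract stays the same.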

A big advantage of working with business objects is that it really helps to organize your code. If you look back at a complex piece of logic, it's usually very readable because there's very little plumbing code. The downside is that most data stores are still relational and mapping business objects to relational data can become quite complex.

Schema-Based Services

You've just seen two opposite approaches to managing the flow of data, and many variations are possible. A common one is the variant in which a DataSet is used as the basic data carrier between the UI and the data store, while separate schemas (DTOs) are used for web services called from other systems. The application layer transforms the relational data into a predefined schema. The main advantage of this is that any application that references the service does not depend on any internal implementation details of the component. This allows more flexibility in versioning, backward compatibility of interfaces, and the ability to change the component's implementation without changing the service's interface.

Of course, you can use business objects in the web application and bypass the DTO transformation, but this usually works well only if the application logic is deployed together with the web application. Remember that calling Order.Save() requires a database connection. Whether that is desirable is up to you, and probably your security officer.