The article describes and gives some examples of the application of several architectural patterns.
It can be found at:
A causal model, or a model of causality, is a representation of a domain that predicts the results of interventions. An intervention is an action that forces a variable to have a particular value. That is, an intervention changes the value in some way other than manipulating other variables in the model. To predict the effect of interventions, a causal model represents how the cause implies its effect. When the cause is changed, its effect should be changed. An evidential model represents a domain in the other direction – from effect to cause. Note that we do not assume that there is “the cause” of an effect; rather there are propositions, which together may cause the effect to become true.
Example
In the electrical domain depicted in Figure 5.2, consider the relationship between poles p1 and p2 and alarm a1. Assume all components are working properly. Alarm a1 sounds whenever both poles are + or both poles are -. Thus,
sounds_a1 ↔ (positive_p1 ↔ positive_p2)    (*)
This is logically equivalent to
positive_p1 ↔ (sounds_a1 ↔ positive_p2)
This formula is symmetric between the three propositions; it is true if and only if an odd number of the propositions are true. However, in the world, the relationship between these propositions is not symmetric. Suppose both poles were positive and the alarm was sounding. Intervening to make p1 negative does not make p2 go negative so that a1 keeps sounding. Instead, making p1 negative makes sounds_a1 false, while positive_p2 remains true. Thus, to predict the result of interventions, we require more than proposition (*) above.
A causal model is
sounds_a1 ← positive_p1 ∧ positive_p2    (1)
sounds_a1 ← ¬positive_p1 ∧ ¬positive_p2    (2)
The completion of this is equivalent to proposition (*); however, it makes reasonable predictions when one of the values is changed. Changing one of the pole positions changes whether the alarm sounds, but changing whether the alarm sounds (by some other mechanism) does not change whether the poles are positive or negative.
An evidential model is
positive_p1 ← sounds_a1 ∧ positive_p2    (1)
positive_p1 ← ¬sounds_a1 ∧ ¬positive_p2    (2)
This can be used to answer questions about whether p1 is positive based on the charge of p2 and whether a1 sounds. Its completion is also equivalent to formula (*).
However, it does not accurately predict the effect of interventions. For most purposes, it is preferable to use a causal model of the world, as it is more transparent, stable and modular than an evidential model.
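To make the asymmetry concrete, here is a minimal, self-contained C++ sketch (not part of the source text; all identifiers are illustrative) that encodes clauses (1) and (2) as a function from the pole charges to the alarm. Intervening on a pole and recomputing the effect changes whether the alarm sounds, whereas forcing the alarm off is just an assignment to the effect variable and leaves the pole charges untouched.

#include <iostream>

// Causal model: the alarm is computed from its causes, the pole charges
// (clauses (1) and (2) above).
bool soundsA1(bool positiveP1, bool positiveP2) {
    return (positiveP1 && positiveP2) || (!positiveP1 && !positiveP2);
}

int main() {
    bool positiveP1 = true, positiveP2 = true;                 // both poles positive
    std::cout << soundsA1(positiveP1, positiveP2) << "\n";     // 1: a1 sounds

    // Intervention on a cause: do(positive_p1 := false), then recompute the effect.
    positiveP1 = false;
    std::cout << soundsA1(positiveP1, positiveP2) << "\n";     // 0: a1 stops sounding
    std::cout << positiveP2 << "\n";                           // 1: p2 is unchanged

    // Intervention on the effect: silencing the alarm by some other mechanism
    // is just do(sounds_a1 := false); nothing feeds back into the pole charges.
    bool soundsA1Forced = false;
    std::cout << soundsA1Forced << " "
              << positiveP1 << " " << positiveP2 << "\n";      // poles keep their values
    return 0;
}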
Rothman's model is a theoretical model that takes multi-causal relationships into account (it has its origins in epidemiology studies).
Definitions:
Cause: an event, condition or characteristic that plays an essential role in the generation of an effect (a consequence)
Cause types:
1- Component cause: a cause that contributes to forming a "conglomerate" that will constitute a sufficient cause
2- Sufficient cause: a set of (component) causes that will produce an Effect
3- Necessary cause: a component cause that is part of every sufficient cause, so the Effect cannot occur without it
Model characteristics:
I) None of the component causes of a sufficient cause is superfluous: if one of them is missing, that sufficient cause is not completed
II) The Effect does not depend on one specific Sufficient Cause (different sufficient causes can produce the same Effect)
III) A component cause can be part of more than one sufficient cause producing the same Effect
IV) A component cause can be part of different sufficient causes that produce different Effects
V) The component causes of a sufficient cause are linked with the other component causes of that sufficient cause (interrelation), as illustrated in the sketch below
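As an illustration of these characteristics, the following C++ sketch (not part of Rothman's formulation; the causes A, B, C, D are invented for the example) represents each sufficient cause as a set of component causes and checks whether any sufficient cause is completely present. Component A belongs to both sufficient causes (characteristic III) and, since it appears in every sufficient cause, it also behaves as a necessary cause.

#include <iostream>
#include <set>
#include <string>
#include <vector>

// A sufficient cause is a set of component causes that must all be present.
using ComponentCause = std::string;
using SufficientCause = std::set<ComponentCause>;

// The effect occurs if at least one sufficient cause is completely present.
bool effectOccurs(const std::vector<SufficientCause>& sufficientCauses,
                  const std::set<ComponentCause>& present) {
    for (const SufficientCause& sc : sufficientCauses) {
        bool complete = true;
        for (const ComponentCause& component : sc)
            if (present.count(component) == 0) { complete = false; break; }
        if (complete) return true;      // the effect does not depend on which one
    }
    return false;
}

int main() {
    // Two hypothetical sufficient causes for the same effect.
    std::vector<SufficientCause> sufficientCauses = { {"A", "B", "C"}, {"A", "D"} };

    std::set<ComponentCause> present = {"A", "D"};
    std::cout << effectOccurs(sufficientCauses, present) << "\n";   // 1: {A, D} is complete

    present = {"B", "C", "D"};          // A (the necessary cause) is missing
    std::cout << effectOccurs(sufficientCauses, present) << "\n";   // 0: no sufficient cause is complete
    return 0;
}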
This is a sample C++ program that generates random "nicknames" of configurable length and selects those nicknames that satisfy the "Armstrong property" (spelled "Armtrong" in the code).
Armstrong property:
A number x1x2x3... satisfies the Armstrong property if x1x2x3... = x1^3 + x2^3 + x3^3 + ...
An example is the number 153 = 1^3 + 5^3 + 3^3 = 1 + 125 + 27.
Note, however, that this program uses a different (modified) criterion: x1x2x3... = x1*3 + x2*3 + x3*3 + ... (for example, 27 satisfies it, since 2*3 + 7*3 = 27).
For the iterations and nickname lengths specified in the code below, the program prints every generated name and then lists the names that satisfy this modified property.
Note: Works fine with Codeblocks and gcc.
armtrongnames.cpp
/*
 * armtrongnames.cpp
 *
 * Created on: 12 Dec 2018
 * Author: david
 */
/*
 * This is an example of Armstrong numbers applied to person-name generation:
 * - letters are mapped to numbers as a=1, b=2, ... z=25
 * - the vowels are a, e, i, o, u
 * - names are generated with the patterns cvcvcv(c) and vcvcvc(v); at the same time
 *   we check whether the number mapping satisfies the (modified) Armstrong property
 * - finally we keep the "Armstrong names"
 * Modified Armstrong property: x1x2...xn = x1*3 + x2*3 + ... + xn*3
 */
#include <iostream>
#include <string>
#include <vector>
#include <cstdlib>
#include <ctime>
#include <cmath>
#include <conio.h>   // getch(); available with Code::Blocks / MinGW

using namespace std;

// Returns 0 if the letter at position pos is a vowel, -1 otherwise.
int isVocal(int pos, vector<int> vocalsW) {
    int r = -1;
    for (size_t i = 0; i < vocalsW.size(); i++) {
        if (vocalsW[i] == pos) {
            r = 0;
            break;
        }
    }
    return r;
}

// Returns 0 if the letter at position pos is a consonant, -1 otherwise.
int isConsonant(int pos, vector<int> vocalsW, string letters) {
    int r = -1;
    if (pos > 0 && pos <= (int)letters.length()) {
        r = isVocal(pos, vocalsW);
        if (r == 0) r = -1;
        else r = 0;
    }
    return r;
}

// Maps a 1-based position back to its letter.
char getLetter(int pos, string letters) {
    char c = '*';
    if (pos > 0 && pos <= (int)letters.length())
        c = letters.at(pos - 1);
    return c;
}

// Picks the position of a random vowel.
int getRandVocalPos(vector<int> vocalsW) {
    int pos = rand() % 5;
    return vocalsW[pos];
}

// Picks the position of a random consonant.
int getRandConsPos(vector<int> vocalsW, string letters) {
    int pos = -1;
    bool end = false;
    while (!end) {
        pos = (rand() % (int)letters.length()) + 1;
        if (isConsonant(pos, vocalsW, letters) == 0)
            end = true;
    }
    return pos;
}

// Builds a decimal number from the letter positions (currently unused).
int compDec(vector<int> v) {
    int n = 0;
    int i = v.size() - 1;
    int pw = 0;
    while (i >= 0) {
        if (v[i] != 0) {
            n = n + (v[i] * (int)(pow(10, pw)));
            pw++;
        }
        i--;
    }
    return n;
}

// Modified Armstrong value of s: the sum of its decimal digits, each multiplied by 3.
int compArmtrong(int s) {
    int a = 0;
    int b = s;
    int c = 0;
    while (b > 0) {
        c = b % 10;     // current digit
        b = b / 10;
        a = a + (c * 3);
        /* a = a + (int) pow(c, 3);   // classic Armstrong variant (cubes) */
    }
    return a;
}

// Sum of all letter positions of a name.
int sumVect(vector<int> v) {
    int a = 0;
    for (size_t i = 0; i < v.size(); i++)
        a = a + v[i];
    return a;
}

// Returns 0 if the name (as letter positions) satisfies the modified property, -1 otherwise.
int isArmtrong(vector<int> v) {
    int r = -1;
    int ams = -1;
    int sum = sumVect(v);
    ams = compArmtrong(sum);
    if (ams == sum)
        r = 0;
    return r;
}

// Turns a vector of letter positions into a readable name.
string buildName(vector<int> v, string letters) {
    string name("");
    char c;
    for (size_t i = 0; i < v.size(); i++) {
        c = getLetter(v[i], letters);
        name = name + c;
    }
    return name;
}

int main() {
    srand(time(NULL));   // seed the random generator

    vector<string>::iterator iter;
    // 25 letters ('v' is left out), so a=1 ... z=25; the vowels are at 1, 5, 9, 15, 21.
    std::string letters("abcdefghijklmnopqrstuwxyz");
    vector<int> vocalsW;

    int MAXITER = 900;
    int MAXNAMELEN = 20;
    int MINNAMELEN = 4;
    int T = MAXITER * ((MAXNAMELEN - MINNAMELEN) + 1);   // total (iteration, length) combinations

    std::vector<string> *amsnames = new vector<string>();

    vocalsW.push_back(1); vocalsW.push_back(5);
    vocalsW.push_back(9); vocalsW.push_back(15);
    vocalsW.push_back(21);

    int i = 0;
    while (i < MAXITER) {
        int j = MINNAMELEN;
        while (j <= MAXNAMELEN) {
            int altern = 2;   // build one c-v-c-v... and one v-c-v-c... name per length
            while (altern > 0) {
                vector<int> nameW;
                if (altern % 2 == 0) {   // first a consonant
                    for (int k = 1; k <= j; k++) {
                        if (k % 2 != 0)
                            nameW.push_back(getRandConsPos(vocalsW, letters));
                        else
                            nameW.push_back(getRandVocalPos(vocalsW));
                    }
                } else {                 // first a vowel
                    for (int k = 1; k <= j; k++) {
                        if (k % 2 != 0)
                            nameW.push_back(getRandVocalPos(vocalsW));
                        else
                            nameW.push_back(getRandConsPos(vocalsW, letters));
                    }
                }
                int r = isArmtrong(nameW);
                string name = buildName(nameW, letters);
                cout << name << endl;
                if (r == 0)
                    amsnames->push_back(name);
                altern--;
            }
            j++;
        }
        i++;
    }

    cout << "+++++++++++++++++++++++++" << endl;
    cout << "Generated Armtrong Names:" << amsnames->size() << endl;
    int cnt = 1;
    for (iter = amsnames->begin(); iter != amsnames->end(); iter++) {
        cout << cnt << " :" << *iter << endl;
        cnt++;
    }
    getch();
    return 0;
}
In the early 1970s, Artificial Intelligence (AI) was conceived as a system based on the von Neumann model (with a single control center) and on the concepts of traditional Psychology. In the late 1970s, the idea of individual behavior was explored in several works that required distributed control. For example, works related to blackboards (Fennel & Lesser, 1977) and actors (Hewitt, 1977) allowed classical problems to be modeled using concepts such as cooperation, communication and distribution. Researchers therefore started to investigate the interaction between systems, trying to solve distributed problems from a more social perspective.
In order to find solutions for distributed systems, Distributed Artificial Intelligence (DAI) began to be investigated in the early 1980s. It combines the theoretical and practical concepts of AI and Distributed Systems (DS). Its solutions are also based on social behaviors, where cooperative behavior is used to solve a problem. DAI differs from DS because (i) it is not based on the client-server model, and (ii) it does not address issues of distributed processing aimed at increasing the efficiency of the computation itself (transmission rate, bandwidth, etc.); instead, it aims to develop cooperation between the entities involved in a system. DAI also differs from AI because it brings new and broader perspectives on knowledge representation, planning, problem solving, coordination, communication, negotiation, etc.
Multi-Agent Systems (MAS) are one of the research areas of DAI and use autonomous agents with their own actions and behaviors. The agents in a MAS are designed to act as experts in a particular area. Their main characteristic is to control their own behavior and, if necessary, to act without any intervention from humans or other systems. The designer's focus is to develop agents that work in an autonomous or social way, as well as mechanisms of communication and cooperation/collaboration, so that the solution arises from the interactions. This bottom-up approach usually leads to an open architecture, where agents can be inserted, deleted, and reused. According to Sawyer (2003), the Internet is an example of a MAS because it is constituted by thousands of independent computers, each one running autonomous software programs that are capable of communicating with a program running on any other node of the network.
The term agent is used frequently in AI, but also outside the field, for example in connection with databases and manufacturing automation. When people in AI use the term, they are referring to an entity that functions continuously and autonomously in an environment where other processes take place and other agents exist. The sense of autonomy is not precise, but the term is taken to mean that the agent's activities do not require constant human guidance or intervention (Shoham, 1993). There are a number of good reasons for supposing that agent technology will enhance the ability of software engineers to construct complex and distributed applications: it is a powerful and natural metaphor for conceptualizing, designing and implementing many systems.
This chapter gives an overview of the theoretical and technical concepts of Agent-Oriented Programming (AOP). Historically, AOP appeared after Object-Oriented Programming (OOP); however, the differences between them are not clear to the research and development community. Section 2 discusses the differences between objects and agents as well as the evolution of the programming-language paradigms. Section 3 presents the micro and macro levels of a society of agents. The pitfalls of AOP are explained in Section 4. Two multi-agent platforms are presented in Section 5. In Section 6, two multi-agent applications are presented: a cognitive model of a stock exchange and a military application of real-time tactical information management.
Procedural programs are typically intended to be executed discretely, in a batch mode with a specific start and end (Huhns, 2004). The modular programming approach, however, employs smaller units of code that can be reused in a variety of situations; structured loops and subroutines are designed to have a high degree of local integrity (Odell, 1999). The concepts of object and agent are the key to understanding OOP and AOP, respectively.
In Silva et al. (2003), an agent is defined as an extension of an object with additional features, because it extends the definition of state and behavior associated with objects. The agent's mental state extends the object's state and behavior: beliefs play the role of the object's state, while goals, plans and actions correspond to the agent's behaviors. Moreover, the behavior of an agent extends the behavior of an object because agents have the freedom to control and change their own behaviors, and they do not require external stimuli to carry out their jobs. This makes agents active elements and objects passive ones.
Agents exhibit some degree of unpredictable behavior. For example, ants appear to take a random walk when they are trying to find food; their behavior starts to become predictable when pheromones or food are detected. Therefore, an agent can range from totally predictable to completely unpredictable. Objects, on the other hand, do not have to be completely predictable (Odell, 2002).
An agent has the ability to communicate with the environment and with other entities. In a MAS, the agents are autonomous and are able to interact with the environment and with other agents. Object messages, in contrast, are the most basic form of interaction: a message requests a single operation, formatted in a very exacting way. The object-oriented message broker has the job of matching each message to exactly one method invocation on exactly one object.
Communication between agents in a MAS may use OOP method invocation; however, the demands on messages are greater than in object technology. An agent message may consist of a character string whose form can vary yet obeys a formal syntax, whereas a conventional object-oriented method call must contain parameters whose number and sequence are fixed. Agents may also engage in multiple transactions concurrently, using multiple threads or similar mechanisms, and conventional OOP has difficulty supporting such requirements (Odell, 2002). It is still possible for agents to employ objects for situations that require little autonomous or interactive ability. In a MAS environment, an Agent Communication Language (ACL) is needed to send a message to any agent; KQML (Group et al., 1992) and FIPA ACL (Buckle & Hadingham, 2000) are examples of ACLs.
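To illustrate the difference in message form, here is a small C++ sketch (only an illustration; the field names are simplified and do not follow the exact KQML or FIPA ACL syntax). An agent message carries a performative (a speech act) plus free-form content that obeys some agreed syntax, and the receiver decides how to react, whereas an object message maps to exactly one method with a fixed parameter list.

#include <iostream>
#include <string>

// Sketch of a speech-act style message, loosely inspired by KQML/FIPA ACL.
struct AclMessage {
    std::string performative;   // e.g. "inform", "request", "query-if"
    std::string sender;
    std::string receiver;
    std::string content;        // free-form string obeying an agreed syntax
};

// The receiving agent decides autonomously how (and whether) to react.
void handle(const AclMessage& msg) {
    if (msg.performative == "request")
        std::cout << msg.receiver << " may accept or refuse: " << msg.content << "\n";
    else if (msg.performative == "inform")
        std::cout << msg.receiver << " updates its beliefs with: " << msg.content << "\n";
}

int main() {
    // Contrast with an OOP call such as account.debit(100), which always maps to
    // exactly one method with a fixed number and order of parameters.
    handle({"request", "buyer1", "seller1", "price(item42)"});
    handle({"inform", "seller1", "buyer1", "price(item42, 30)"});
    return 0;
}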
Rentsch (1982) predicted that OOP would be in the 1980s what structured programming was in the 1970s. The earliest precursor of MAS was OOP. The challenges of object-oriented analysis and design are to identify the objects and their attributes needed to implement the software, to describe the associations between the identified objects, to define the behavior of the objects by describing the function implementations of each object, and to refine objects and organize classes by using inheritance to share common structure (Wahono, 2001). In OOP, an object is a single computational process that maintains its own data structures and procedures (Sawyer, 2003); it also holds segments of code (methods) and has local control over the variables manipulated by those methods. In traditional OOP, objects are passive because their methods are invoked only when some external entity sends them a message. The basic element of OOP is the class: a class definition specifies the class variables of an object and the methods the object accepts. One class can inherit from another, so that the new class is an extension of the existing one, and instances of two classes collaborate with each other by exchanging messages (Lind, 2000).
In Wahono (2001), the object is defined as the principal building block of OOP. Each object is a programming unit consisting of attributes (instance variables) and behaviors (instance methods): an object is a software bundle of variables and related methods. It is easy to find examples of real-world objects, and real-world objects can be represented as software objects. For example, a bicycle has attributes (gear, pedal cadence, two wheels) and behaviors (braking, accelerating, slowing down). A software object modeling our real-world bicycle would have variables indicating the bicycle's current attributes: its speed is 10 mph, its pedal cadence is 90 rpm, and its current gear is the 5th gear. These variables and methods are formally known as instance variables and instance methods to distinguish them from class variables and class methods.
A MAS can be considered an object-oriented system associated with an intelligent meta-system. In this way, an agent is viewed as an object with a layer of intelligence comprising capabilities such as a uniform communication protocol, perception, reaction and deliberation, none of which is inherent to objects. Like OOP, AOP has code, states and invocations, but agents also have individual rules and goals that make them appear as active objects with initiative. In AOP the class is replaced by the role, the state variable by belief/knowledge, and the method by the message. Role definitions describe the agent's capabilities and the information needed to achieve the desired results. In order for agents to act intelligently in their environment, the idea is to develop these more complex entities and provide them with the knowledge and beliefs needed to achieve their desires.
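The contrast between a passive object and an active agent can be sketched in C++ as follows (a minimal illustration, not taken from the chapter; it assumes a trivial goal, a fixed sensor reading and a polling loop in the agent's own thread):

#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

// A passive object: its method runs only when some external caller invokes it.
class Thermometer {
public:
    double read() const { return 21.5; }     // illustrative fixed reading
};

// An agent-like active element: it owns a thread and pursues its goal
// (keep the temperature above a threshold) without external invocations.
class HeatingAgent {
public:
    explicit HeatingAgent(const Thermometer& t)
        : sensor(t), running(true), worker([this] { run(); }) {}
    ~HeatingAgent() { running = false; worker.join(); }
private:
    void run() {
        while (running) {                                         // the agent's own control loop
            if (sensor.read() < 22.0)                             // belief obtained by perception
                std::cout << "agent: turning the heating on\n";   // action toward its goal
            std::this_thread::sleep_for(std::chrono::milliseconds(200));
        }
    }
    const Thermometer& sensor;
    std::atomic<bool> running;
    std::thread worker;
};

int main() {
    Thermometer t;
    std::cout << "object: read() returns " << t.read() << " only when called\n";
    HeatingAgent agent(t);                                 // the agent starts acting on its own
    std::this_thread::sleep_for(std::chrono::milliseconds(500));
    return 0;                                              // the agent stops when destroyed
}

(Compile with -pthread when using gcc.)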
Table 1. Relation between OOP and AOP.
Table 1 summarizes the major features of the relation between OOP and AOP. In short, AOP is seen as an extension of OOP; OOP, in turn, can be viewed as a successor of structured programming. Wagner (2003) defines two main characteristics of AOP. First, while the state of an object in OOP has no generic structure, the state of an agent in AOP consists of mental components such as beliefs and commitments. Second, while messages in OOP are coded in an application-specific, ad-hoc manner, a message in AOP is coded as a speech act according to a standard, application-independent Agent Communication Language.
Autonomy and interaction are the key aspects that differentiate AOP from OOP. The following list describes some underlying concepts that agent-based systems employ (Odell, 2002):
• Decentralization: objects are centrally organized, because an object's methods are invoked under the control of other components in the system. An agent, on the other hand, can process in both a centralized and a decentralized way;
• Multiple and dynamic classification: in OOP, objects are created from a class and, once created, may never change their class or become instances of multiple classes (except by inheritance). Agents, however, provide a more flexible approach;
• Small in impact: objects and agents can be small grained or large grained, and in comparison with the whole system an agent or object can be small. In an agent-based supply chain, if a supplier or a buyer is lost, the collective dynamics can still dominate; if an object is lost in a system, an exception is raised;
• Emergence: ant colonies have emergent qualities, in which groups of agents behave as a single entity. Each colony consists of individual agents acting according to their own rules and even cooperating to some extent. In a MAS, simple rules produce emergence. Since traditional objects do not interact without a higher-level thread of control, emergence does not usually occur with them. As more agents become decentralized, their interaction is more subject to emergence.
More: Multi-agent systems
In this article you'll find a basic example of this Java interface, and it will introduce you to the use of the @Async annotation (Spring, EJB).
You can find it at: Future interface basics
You can find a whitepaper of the blockchain project Bletchley, with good definitions.
This example tries to solve the following problem: we have an NxN checkerboard. Every (i,j) cell (called a "case" in the code) contains k points (in this example k=4): one entry point located on the cell's boundary, two points inside the cell, and one exit point, also on the boundary (different from the entry point). Given an initial cell and an end cell, the algorithm uses the Branch & Bound technique to compute the minimal-cost path from the initial cell to the end cell. It also uses the visitor pattern to collect the results. The implementation still needs to be finished in some places.
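Since parts of the original listing below are incomplete, here is a separate, self-contained C++ sketch of the Branch & Bound idea on a plain NxN grid of per-cell costs (it deliberately ignores the four-point structure of the cases; every name in it is illustrative). Partial paths are expanded best-first, and a branch is pruned as soon as its accumulated cost plus an optimistic estimate of the remaining cost cannot improve on the best complete path found so far.

#include <algorithm>
#include <cstdlib>
#include <iostream>
#include <limits>
#include <queue>
#include <vector>

// A partial path ending at cell (r, c).
struct Node {
    int r, c;
    double g;       // cost accumulated so far
    double bound;   // g + optimistic estimate of the remaining cost
};
struct ByBound {    // order the queue by the most promising bound first
    bool operator()(const Node& a, const Node& b) const { return a.bound > b.bound; }
};

double minCostPath(const std::vector<std::vector<double>>& cost,
                   int sr, int sc, int er, int ec) {
    const int n = static_cast<int>(cost.size());
    const double INF = std::numeric_limits<double>::max();

    double minCell = INF;                      // cheapest cell, used for the lower bound
    for (const auto& row : cost)
        for (double v : row) minCell = std::min(minCell, v);
    auto estimate = [&](int r, int c) {        // optimistic: Manhattan distance * cheapest cell
        return (std::abs(r - er) + std::abs(c - ec)) * minCell;
    };

    std::vector<std::vector<double>> best(n, std::vector<double>(n, INF));
    double bestComplete = INF;

    std::priority_queue<Node, std::vector<Node>, ByBound> open;
    open.push({sr, sc, cost[sr][sc], cost[sr][sc] + estimate(sr, sc)});
    best[sr][sc] = cost[sr][sc];

    const int dr[] = {1, -1, 0, 0}, dc[] = {0, 0, 1, -1};
    while (!open.empty()) {
        Node cur = open.top(); open.pop();
        if (cur.bound >= bestComplete) continue;       // bound: prune this branch
        if (cur.r == er && cur.c == ec) {              // a complete path was reached
            bestComplete = std::min(bestComplete, cur.g);
            continue;
        }
        for (int k = 0; k < 4; ++k) {                  // branch: extend to the neighbours
            int nr = cur.r + dr[k], nc = cur.c + dc[k];
            if (nr < 0 || nr >= n || nc < 0 || nc >= n) continue;
            double g = cur.g + cost[nr][nc];
            if (g >= best[nr][nc]) continue;           // a cheaper partial path already exists
            best[nr][nc] = g;
            open.push({nr, nc, g, g + estimate(nr, nc)});
        }
    }
    return bestComplete;
}

int main() {
    std::vector<std::vector<double>> cost = { {1, 3, 1},
                                              {2, 9, 2},
                                              {5, 1, 1} };
    // Minimal cost from the top-left to the bottom-right cell: 1+3+1+2+1 = 8.
    std::cout << minCostPath(cost, 0, 0, 2, 2) << "\n";
    return 0;
}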
Main library: "pathfinder.h"
#ifndef PATHFINDER_H_INCLUDED
#define PATHFINDER_H_INCLUDED

#include "visitor.h"
#include
Getting results (visitor pattern): "visitor.h"
#ifndef VISITOR_H_INCLUDED
#define VISITOR_H_INCLUDED

#include "pathfinder.h"
#include <vector>   // assumed

class SolutionDisplay {
public:
    SolutionDisplay(Solution * sol);
    ~SolutionDisplay();
    virtual void print();
private:
    Solution * _solution;
};

class Visitor {
public:
    Visitor();
    ~Visitor();
    virtual void visit(Backtracking * b) = 0;   // implemented by concrete visitors
    virtual void printSolution();
    virtual std::vector<int> * getSolution();   // element type assumed
    virtual double getCost();
protected:   // derived visitors need to set the solution
    Solution * _solution;
    SolutionDisplay * _solutionDis;
};

void Visitor::printSolution() {
    this->_solutionDis = new SolutionDisplay(this->_solution);
    this->_solutionDis->print();
}

class PathFinderVisitor : public Visitor {
public:
    PathFinderVisitor();
    ~PathFinderVisitor();
    void visit(Backtracking * backtracking);
private:
};

void PathFinderVisitor::visit(Backtracking * backtracking) {
    if (!backtracking->compute())
        this->_solution = backtracking->getSolution();
}

#endif // VISITOR_H_INCLUDED
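Because pathfinder.h is not complete in the listing above, the following self-contained sketch shows how the visitor wiring is intended to be used; the Solution and Backtracking classes here are minimal stand-ins invented for the example, not the article's real implementations.

#include <iostream>

// Minimal stand-in for the Solution class that pathfinder.h would provide.
class Solution {
public:
    explicit Solution(double cost) : _cost(cost) {}
    double getCost() const { return _cost; }
private:
    double _cost;
};

// Minimal stand-in for the Branch & Bound solver.
class Backtracking {
public:
    ~Backtracking() { delete _solution; }
    int compute() {                      // returns 0 on success, as visitor.h expects
        _solution = new Solution(8.0);   // dummy result standing in for the real search
        return 0;
    }
    Solution * getSolution() { return _solution; }
private:
    Solution * _solution = nullptr;
};

// A visitor in the spirit of PathFinderVisitor: it runs the solver and keeps
// the resulting Solution so it can be displayed later.
class PathFinderVisitor {
public:
    void visit(Backtracking * b) {
        if (b->compute() == 0)
            _solution = b->getSolution();
    }
    void printSolution() const {
        if (_solution)
            std::cout << "path cost: " << _solution->getCost() << "\n";
    }
private:
    Solution * _solution = nullptr;
};

int main() {
    Backtracking solver;
    PathFinderVisitor visitor;
    visitor.visit(&solver);      // the visitor pulls the result out of the solver
    visitor.printSolution();     // prints the dummy cost: 8
    return 0;
}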