Wednesday 17 July 2013

C# versus C++

Although it has some elements derived from Visual Basic and Java, C++ is C#'s closest relative.
In an important change from C++, C# code does not require header files. All code is written inline.
As touched on above, the .NET runtime in which C# runs performs memory management, taking care of tasks like garbage collection. Because of this, the use of pointers in C# is much less important than in C++. Pointers can be used in C#, where the code is marked as 'unsafe', but they are only really useful in situations where performance gains are at an absolute premium.
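As a rough illustration of the 'unsafe' feature (the class, method and variable names here are invented for the example), the sketch below takes the address of a local variable and writes through the pointer; code like this must be compiled with the /unsafe compiler option:

    // A minimal sketch, assuming compilation with: csc /unsafe PointerSketch.cs
    class PointerSketch
    {
        unsafe static void Main()
        {
            int number = 10;
            int* p = &number;                     // take the address of a stack variable
            *p = 20;                              // write through the pointer
            System.Console.WriteLine(number);     // prints 20
        }
    }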
 
Speaking generally, the 'plumbing' of C# types is different from that of C++ types, with all C# types being ultimately derived from the 'object' type. There are also specific differences in the way that certain common types can be used. For instance, C# arrays are bounds checked unlike in C++, and it is therefore not possible to write past the end of a C# array.
C# statements are quite similar to C++ statements. To note just one example of a difference: the 'switch' statement has been changed so that 'fall-through' behavior is disallowed.
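For example (a minimal sketch with an invented variable and values), every non-empty case in a C# switch must end with a jump statement such as break, so the silent fall-through familiar from C++ is not allowed:

    class SwitchDemo
    {
        static void Main()
        {
            int day = 3;   // hypothetical value
            switch (day)
            {
                case 1:
                    System.Console.WriteLine("Start of the week");
                    break;            // required: omitting it is a compile-time error
                case 2:
                case 3:               // stacking empty case labels is still allowed
                    System.Console.WriteLine("Midweek");
                    break;
                default:
                    System.Console.WriteLine("Some other day");
                    break;
            }
        }
    }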
 
As mentioned above, C# gives up on the idea of multiple class inheritance. Other differences relating to the use of classes are: there is support for class 'properties' of the kind found in Visual Basic, and class methods are called using the . operator rather than the :: operator.
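A short sketch (the class, property and method names are invented for illustration) of a Visual Basic-style property and of calling a method with the '.' operator:

    class Person
    {
        private string name;

        // A property of the kind found in Visual Basic
        public string Name
        {
            get { return name; }
            set { name = value; }
        }

        public void Greet()
        {
            System.Console.WriteLine("Hello, " + Name);
        }
    }

    class Program
    {
        static void Main()
        {
            Person p = new Person();
            p.Name = "Ada";   // the '.' operator, not '::'
            p.Greet();
        }
    }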

C# versus Java

C# and Java are both new-generation languages descended from a line including C and C++. Each includes advanced features, like garbage collection, which remove some of the low level maintenance tasks from the programmer. In a lot of areas they are syntactically similar.
 
Both C# and Java compile initially to an intermediate language: C# to Microsoft Intermediate Language (MSIL), and Java to Java bytecode. In each case the intermediate language can be run - by interpretation or just-in-time compilation - on an appropriate 'virtual machine'. In C#, however, more support is given for the further compilation of the intermediate language code into native code.
 
C# contains more primitive data types than Java, and also allows more extension to the value types. For example, C# supports 'enumerations', type-safe value types limited to a defined set of named constants, and 'structs', which are user-defined value types.
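The following sketch (all type names invented) shows both kinds of user-defined value type mentioned above:

    // An enumeration: a type-safe set of named constants
    enum Direction { North, South, East, West }

    // A struct: a user-defined value type
    struct Point
    {
        public int X;
        public int Y;
        public Point(int x, int y) { X = x; Y = y; }
    }

    class ValueTypeDemo
    {
        static void Main()
        {
            Direction d = Direction.North;
            Point p = new Point(3, 4);
            System.Console.WriteLine("{0}: ({1}, {2})", d, p.X, p.Y);
        }
    }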
 
Unlike Java, C# has the useful feature that we can overload various operators.
Like Java, C# gives up on multiple class inheritance in favour of a single inheritance model extended by the multiple inheritance of interfaces. However, polymorphism is handled in a more complicated fashion, with derived class methods either 'overriding' or 'hiding' super class methods.
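A minimal sketch (class and method names invented) of the two behaviours: 'virtual'/'override' gives true polymorphism, while 'new' hides the base class method:

    class BaseClass
    {
        public virtual void Describe() { System.Console.WriteLine("Base"); }
        public void Report() { System.Console.WriteLine("Base report"); }
    }

    class DerivedClass : BaseClass
    {
        public override void Describe() { System.Console.WriteLine("Derived (override)"); }
        public new void Report() { System.Console.WriteLine("Derived report (hides base)"); }
    }

    class PolymorphismDemo
    {
        static void Main()
        {
            BaseClass b = new DerivedClass();
            b.Describe();   // prints "Derived (override)" - overriding
            b.Report();     // prints "Base report"        - hiding
        }
    }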
 
C# also uses 'delegates' - type-safe method pointers. These are used to implement event handling.
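A small sketch (the delegate, event and class names are all invented) showing a delegate used as the basis of an event:

    using System;

    // A delegate: a type-safe method pointer
    delegate void Notify(string message);

    class Button
    {
        public event Notify Clicked;   // an event built on the delegate type

        public void SimulateClick()
        {
            if (Clicked != null)
                Clicked("Button was clicked");
        }
    }

    class EventDemo
    {
        static void Main()
        {
            Button b = new Button();
            b.Clicked += msg => Console.WriteLine(msg);   // subscribe a handler
            b.SimulateClick();
        }
    }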
 
In Java, multi-dimensional arrays are implemented solely with single-dimensional arrays (where arrays can be members of other arrays). In addition to jagged arrays, however, C# also implements genuine rectangular arrays. 
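A brief sketch contrasting the two array kinds (the sizes and values are chosen arbitrarily for the example):

    class ArrayDemo
    {
        static void Main()
        {
            // Jagged array: an array of arrays, as in Java
            int[][] jagged = new int[2][];
            jagged[0] = new int[3];          // rows may have different lengths
            jagged[1] = new int[5];

            // Rectangular array: a genuine two-dimensional block
            int[,] rectangular = new int[2, 3];
            rectangular[1, 2] = 42;

            System.Console.WriteLine(jagged[1].Length);         // 5
            System.Console.WriteLine(rectangular.GetLength(1)); // 3
        }
    }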


Tuesday 9 July 2013

Do you know this great information?


Types of .NET Languages

To help create languages for the .NET Framework, Microsoft created the Common Language Infrastructure specification (CLI). The CLI describes the features that each language must provide in order to use the .NET Framework and common language runtime and to interoperate with components written in other languages. If a language implements the necessary functionality, it is said to be .NET-compliant.

The .NET Framework was developed so that it could support a theoretically infinite number of development languages. Currently, more than 20 development languages work with the .NET Framework. C# is the programming language specifically designed for the .NET platform, but C++ and Visual Basic have also been upgraded to fully support the .NET Framework. The following are the commonly used languages provided by Microsoft:

• VC++
• VB.NET
• C#
• J#
• JScript .NET
 
Many third parties are writing compilers for other languages with .NET support. With the CLR, Microsoft has adopted a much more liberal policy. Microsoft has itself evolved, developed or modified many of its programming languages to be compliant with the .NET CLR.

VC++

Although Visual C++ (VC++) has undergone changes to incorporate .NET, VC++ also maintains its status as a platform-dependent programming language. Many new MFC classes have been added, and a programmer can choose between using MFC and compiling the program into a platform-specific executable file, or using .NET Framework classes and compiling into a platform-independent MSIL file. A programmer can also specify (via directives) whenever he uses "unsafe" code (code that bypasses the CLR, e.g. the use of pointers).

VB.NET

Out of all the .NET languages, Visual Basic .NET (VB.NET) is the one that has probably undergone the most change. VB.NET may now be considered a complete object-oriented language (as opposed to its previous "half object-based and half object-oriented" status).
Visual Basic .NET provides substantial language innovations over previous versions of Visual Basic. It supports inheritance, constructors, polymorphism, constructor overloading, structured exceptions, stricter type checking, free threading, and many other features. There is only one form of assignment: no Let or Set methods. New rapid application development (RAD) features, such as the XML Designer, Server Explorer, and Web Forms designer, are available in Visual Basic from Visual Studio .NET. With this release, Visual Basic Scripting Edition provides full Visual Basic functionality.

C#

Microsoft has also developed a brand new programming language, C# (C Sharp). This language makes full use of .NET. It is a pure object-oriented language. A Java programmer will find most aspects of this language identical to Java. If you are a newcomer to Microsoft technologies, this language is the easiest way to get on the .NET bandwagon. While VC++ and VB enthusiasts may stick to VC.NET and VB.NET, they would probably increase their productivity by switching to C#. C# is developed to make full use of all the intricacies of .NET. The learning curve of C# for a Java programmer is minimal. Microsoft has also come up with the Microsoft Java Language Conversion Assistant - a tool that automatically converts existing Java-language source code into C# for developers who want to move their existing applications to the Microsoft .NET Framework.

J#

Microsoft has also developed J# (Java Sharp). C# is similar to Java, but it is not entirely identical. It is for this reason that Microsoft has developed J# - the syntax of J# is identical to Visual J++. Microsoft's growing legal battle with Sun over Visual J++ forced Microsoft to discontinue Visual J++, so J# is Microsoft's indirect continuation of Visual J++. It has been reported that porting a medium-sized Visual J++ project entirely to J# takes only a few days of effort.

JScript.NET

JScript.NET is rewritten to be fully .NET aware. It includes support for classes, inheritance, types and compilation, and it provides improved performance and productivity features. JScript.NET is also integrated with Visual Studio .NET. You can take advantage of any .NET Framework class in JScript .NET.

Third-party languages

Microsoft encourages third-party vendors to make use of Visual Studio .NET. Third-party vendors can write compilers for different languages that compile the language to MSIL (Microsoft Intermediate Language). These vendors need not develop their own development environment; they can use Visual Studio .NET as an IDE for their .NET-compliant language. A vendor has already produced COBOL.NET, which integrates with Visual Studio .NET and compiles into MSIL. Theoretically it would then be possible to come up with a Java compiler that compiles into MSIL instead of Java bytecode, and uses the CLR instead of the JVM. However, Microsoft has not pursued this due to possible legal action by Sun.
Several third-party languages support the .NET platform. These languages include APL, COBOL, Pascal, Eiffel, Haskell, ML, Oberon, Perl, Python, Scheme and Smalltalk.


What is Garbage Collection

When you initialize a variable using the new operator, you are in fact asking the compiler to provide you some memory space in the heap memory. The compiler is said to "allocate" memory for your variable. When that variable is no longer needed, such as when your program closes, it (the variable) must be removed from memory and the space it was using can be made available to other variables or other programs. This is referred to as garbage collection. In the past, namely in C/C++, this was a concern for programmers because they usually had to remember to manually delete such a variable (a pointer) and free its memory.

The .NET Framework solves the problem of garbage collection by letting the compiler "clean" memory after you. This is done automatically when the compiler judges it necessary so that the programmer doesn't need to worry about this issue.
Garbage collection is a mechanism that allows the computer to detect when an object can no longer be accessed. It then automatically releases the memory used by that object (as well as calling a clean-up routine, called a "finalizer," which is written by the user). Some garbage collectors, like the one used by .NET, compact memory and therefore decrease your program's working set.
For most programmers, having a garbage collector (and using garbage-collected objects) means that you never have to worry about deallocating memory or reference counting objects, even if you use sophisticated data structures. It does require some changes in coding style, however, if you typically deallocate system resources (file handles, locks, and so forth) in the same block of code that releases the memory for an object. With a garbage-collected object you should provide a method that releases the system resources deterministically (that is, under your program's control) and let the garbage collector release the memory when it compacts the working set.
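A common way to express this in C# is sketched below (the class name and file name are invented): a Dispose method releases the system resource deterministically, while the garbage collector remains responsible for reclaiming the memory itself.

    using System;
    using System.IO;

    class LogFile : IDisposable
    {
        private FileStream stream = new FileStream("log.txt", FileMode.Append);

        public void Dispose()
        {
            // Deterministic release of the system resource (the file handle),
            // done under program control rather than at collection time.
            stream.Dispose();
            GC.SuppressFinalize(this);
        }

        // A finalizer ("~LogFile()") could be added as a last-resort safety net,
        // but the garbage collector itself takes care of releasing the memory.
    }

    // Typical usage: a 'using' block guarantees Dispose is called.
    //   using (LogFile log = new LogFile()) { /* write to the log */ }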


What is Common Language Infrastructure (CLI)

The Common Language Infrastructure (CLI) is an open specification developed by Microsoft that describes the executable code and runtime environment that allow multiple high-level languages to be used on different computer platforms without being rewritten for specific architectures. The CLR is Microsoft's commercial implementation of the Common Language Infrastructure (CLI).

The Common Language Infrastructure (CLI) is a theoretical model of a development platform that provides a device and language independent way to express data and behavior of applications.

While the CLI primarily supports Object Oriented Programming (OOP) languages, procedural and functional languages are also supported. Through the CLI, languages can interoperate with each other and make use of a built-in garbage collector, security system, exception support, and a powerful framework.

  

What is Just in Time Compiler(JIT)

Machines cannot run MSIL directly. The JIT compiler turns MSIL into native code, which is CPU-specific code that runs on the same computer architecture as the JIT compiler. Because the common language runtime supplies a JIT compiler for each supported CPU architecture, developers can write one set of MSIL that can be JIT-compiled and run on computers with different architectures.

However, your managed code will run only on a specific operating system if it calls platform specific native APIs, or a platform-specific class library.
JIT compilation takes into account the fact that some code might never get called during execution. Rather than using time and memory to convert all the MSIL in a portable executable (PE) file to native code, it converts the MSIL as needed during execution and stores the resulting native code so that it is accessible for subsequent calls.
The loader creates and attaches a stub to each of a type's methods when the type is loaded. On the initial call to the method, the stub passes control to the JIT compiler, which converts the MSIL for that method into native code and modifies the stub to direct execution to the location of the native code. Subsequent calls of the JIT-compiled method proceed directly to the native code that was previously generated, reducing the time it takes to JIT-compile and run the code.

Managed and Unmanaged Code

Managed code is code that is written to target the services of the common language runtime. In order to target these services, the code must provide a minimum level of information (metadata) to the runtime. All C#, Visual Basic .NET, and JScript .NET code is managed by default. Visual Studio .NET C++ code is not managed by default, but the compiler can produce managed code by specifying a command-line switch (/CLR).


What is Visual Studio .NET

The following is the list of some of the features of Visual Studio .NET:
 
1. Visual Studio automates the steps required to compile source code.
 
2. The Visual Studio text editor is very intelligent; it can detect errors and suggests code as appropriate as you type.
 
3. Visual Studio includes designers for Windows Forms and Web Forms applications, allowing simple drag-and-drop design of user interface elements.
 
4. Visual Studio contains many powerful tools for visualizing and navigating through the elements of our projects, whether they are C# code files or other resources such as bitmap images or sound files.
 
5. Visual Studio enables us to use advanced debugging techniques when developing projects, such as the ability to step through code one instruction at a time while keeping an eye on the state of our application. 


What is Common Type System (CTS)

Language interoperability and the .NET Class Framework are not possible without all the languages sharing the same data types. What this means is that an "int" should mean the same thing in VB, VC++, C# and all other .NET-compliant languages. The same idea applies to all the other data types. This is achieved through the introduction of the Common Type System (CTS).

The Common Type System (CTS) is an important part of the runtime's support for cross-language integration. The common type system performs the following functions:
• Establishes a framework that enables cross-language integration, type safety, and high performance code execution.
• Provides an object-oriented model that supports the complete implementation of many programming languages.
The common type system supports two general categories of types:

1. Value types

Value types directly contain their data, and instances of value types are either allocated on the stack or allocated inline in a structure. Value types can be built-in, user-defined, or enumeration types.

2. Reference types

Reference types store a reference to the value's memory address and are allocated on the heap. Reference types can be self-describing types, pointer types, or interface types. The type of a reference type can be determined from values of self-describing types. Self-describing types are further split into arrays and class types; class types are user-defined classes, boxed value types, and delegates. 
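A short sketch (the type names are invented) of the practical difference between the two categories: copying a value type copies the data, while copying a reference type copies only the reference.

    struct PointValue { public int X; }        // value type
    class PointReference { public int X; }     // reference type

    class TypeSystemDemo
    {
        static void Main()
        {
            PointValue a = new PointValue { X = 1 };
            PointValue b = a;        // full copy of the data
            b.X = 99;
            System.Console.WriteLine(a.X);   // still 1

            PointReference c = new PointReference { X = 1 };
            PointReference d = c;    // copy of the reference only
            d.X = 99;
            System.Console.WriteLine(c.X);   // now 99
        }
    }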


What is Microsoft Intermediate Language (MSIL)

A .NET programming language (C#, VB.NET, J#, etc.) does not compile directly into executable code; instead it compiles into an intermediate code called Microsoft Intermediate Language (MSIL). As a programmer one need not worry about the syntax of MSIL, since our source code is automatically converted to MSIL. The MSIL code is then sent to the CLR (Common Language Runtime), which converts the code to machine language that is then run on the host machine.

MSIL is similar to Java bytecode. A Java program is compiled into Java bytecode (the .class file) by a Java compiler; the class file is then sent to the JVM, which converts it into the host machine's language.


What is Dot Net Framework

The .NET Framework is the heart of Microsoft .NET. The .NET Framework is a software development platform of Microsoft .NET. Like any platform, it provides a runtime, defines functionality in some libraries, and supports a set of programming languages. The .NET Framework provides the necessary compile-time and run-time foundation to build and run .NET-based applications.

Difference between Procedural and Object Oriented Programming

The different languages reflect the different styles of programming. Procedural programming decomposes a program into various different functional units, each of which can gather and manipulate data as needed. 

Object-oriented programming, on the other hand, decomposes a program into various different data-oriented units or other conceptual units; each unit contains data and various operations that may be performed on that data. Procedural programming forced developers to write highly interdependent code.  

 We can summarize the differences as follows (a short sketch contrasting the two styles appears after the lists):
 
Procedural Programming
 
– top down design
– create functions to do small tasks
– communicate by parameters and return values
 
Object Oriented Programming
 
– design and represent objects
– determine relationships between objects
– determine attributes each object has
– determine behaviors each object will respond to
– create objects and send messages to them to use or manipulate their attributes
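As a rough sketch of the contrast (the task, class and method names are all invented for illustration), here is the same calculation written first as a free-standing function that communicates by parameters and return values, and then as a method on an object that owns its data:

    class ProceduralStyle
    {
        // Procedural style: communicate by parameters and return values
        public static double CircleArea(double radius)
        {
            return System.Math.PI * radius * radius;
        }
    }

    class Circle
    {
        // Object-oriented style: the object owns its attribute ...
        private double radius;
        public Circle(double radius) { this.radius = radius; }

        // ... and the behaviour it responds to
        public double Area() { return System.Math.PI * radius * radius; }
    }

    // Usage:
    //   double a1 = ProceduralStyle.CircleArea(2.0);
    //   double a2 = new Circle(2.0).Area();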

Saturday 6 July 2013

What is stack - peak concepts

An important subclass of lists permits the insertion and deletion of an element to occur only at one end. A linear list of this type is known as ‘stack’.

The insertion is referred to as 'push' and the deletion as 'pop'. The two pointers used for access are the top and bottom pointers.
 
PUSH – storing an element onto the stack.
Check whether top is less than the allowed size; if so, increment top and store the value at the top position.
 
POP – deleting an element from the stack. If the stack is empty (top is at the bottom position), we cannot delete.
Otherwise decrement top by one and return the element at top + 1. 
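A minimal array-based sketch of these two operations in C# (the fixed size, class name and exception messages are arbitrary choices for the example):

    class ArrayStack
    {
        private int[] items = new int[100];   // allowed size chosen arbitrarily
        private int top = -1;                 // -1 means the stack is empty

        public void Push(int value)
        {
            if (top >= items.Length - 1)
                throw new System.InvalidOperationException("Stack overflow");
            top = top + 1;            // increment the top position ...
            items[top] = value;       // ... and store the value there
        }

        public int Pop()
        {
            if (top < 0)
                throw new System.InvalidOperationException("Stack underflow");
            top = top - 1;            // decrement top by one ...
            return items[top + 1];    // ... and return the element at top + 1
        }
    }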


What is Binary Search - peak concepts

In a linear search the search is done over the entire list even if the element to be searched is not available. Some of our improvements work to minimize the cost of traversing the whole data set, but those improvements only cover up what is really a problem with the algorithm.

By thinking of the data in a different way, we can make speed improvements that are much better than anything linear search can guarantee. Consider a list in sorted order. It would work to search from the beginning until an item is found or the end is reached, but it makes more sense to remove as much of the working data set as possible so that the item is found more quickly.  
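A sketch of binary search over a sorted array in C# (the sample data reuses the sorted list shown later in this post); each step discards the half of the working data set that cannot contain the item:

    class BinarySearchDemo
    {
        static int BinarySearch(int[] sorted, int target)
        {
            int low = 0, high = sorted.Length - 1;
            while (low <= high)
            {
                int mid = low + (high - low) / 2;
                if (sorted[mid] == target) return mid;      // found
                if (sorted[mid] < target) low = mid + 1;    // discard the left half
                else high = mid - 1;                        // discard the right half
            }
            return -1;   // not found
        }

        static void Main()
        {
            int[] sorted = { 1, 5, 6, 19, 23, 45, 67, 98, 124, 401 };
            System.Console.WriteLine(BinarySearch(sorted, 19));   // prints 3
        }
    }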



What is bubble sort - peak concepts

Bubble sort: This technique compares the last element with the preceding element. If the last element is less than the preceding element, swapping takes place. Then the preceding element is compared with the element before it. This process continues until the second and first elements have been compared with each other. This is known as pass 1.
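A C# sketch of the passes described above (the method name is invented; each pass walks from the last element towards the front, so the smallest remaining element bubbles to its place):

    class BubbleSortDemo
    {
        static void BubbleSort(int[] a)
        {
            for (int pass = 0; pass < a.Length - 1; pass++)
            {
                // One pass: compare each element with its predecessor,
                // swapping whenever the later element is smaller
                for (int i = a.Length - 1; i > pass; i--)
                {
                    if (a[i] < a[i - 1])
                    {
                        int tmp = a[i];
                        a[i] = a[i - 1];
                        a[i - 1] = tmp;
                    }
                }
            }
        }

        // Usage: BubbleSort(new int[] { 4, 1, 90, 34 }); gives { 1, 4, 34, 90 }
    }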
  


What is Selection Sort - peak concepts

Selection sort: In this technique, the first element is selected and compared with all other elements. If any other element is less than the first element, swapping takes place. By the end of this comparison, the least element occupies the topmost position in the array. This is known as pass 1. In pass 2, the second element is selected and compared with all other elements. Swapping takes place if any other element is less than the selected element. This process continues until the array is sorted.
 
The number of passes is the size of the array minus 1.
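A C# sketch of this process (method name invented; it swaps whenever a smaller element is found, exactly as described above, and makes size-minus-one passes):

    class SelectionSortDemo
    {
        static void SelectionSort(int[] a)
        {
            // size of array - 1 passes
            for (int pass = 0; pass < a.Length - 1; pass++)
            {
                // Compare the selected element with every later element
                for (int i = pass + 1; i < a.Length; i++)
                {
                    if (a[i] < a[pass])
                    {
                        int tmp = a[pass];   // swap if a smaller element is found
                        a[pass] = a[i];
                        a[i] = tmp;
                    }
                }
            }
        }

        // Usage: SelectionSort(new int[] { 23, 5, 11, 0 }); gives { 0, 5, 11, 23 }
    }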


Types of sorting - peak concepts

Insertion sort.
In this method, sorting is done by inserting elements into an existing sorted list. Initially, the sorted list has only one element. Other elements are gradually added into the list in the proper position.
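A brief C# sketch of this idea (method name invented): the sorted list is the prefix of the array, and each remaining element is inserted into its proper position.

    class InsertionSortDemo
    {
        static void InsertionSort(int[] a)
        {
            // Elements a[0..i-1] form the existing sorted list
            for (int i = 1; i < a.Length; i++)
            {
                int value = a[i];
                int j = i - 1;
                while (j >= 0 && a[j] > value)
                {
                    a[j + 1] = a[j];   // shift larger elements to the right
                    j--;
                }
                a[j + 1] = value;      // insert into the proper position
            }
        }
    }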

 
Merge Sort.
In this method, the elements are divided into partitions until each partition has sorted elements. Then, these partitions are merged and the elements are properly positioned to get a fully sorted list.


Quick Sort.
In this method, an element called pivot is identified and that element is fixed in its place by moving all the elements less than that to its left and all the elements greater than that to its right.


Radix Sort.
In this method, sorting is done based on the place values of the number. In this scheme, sorting is done on the less-significant digits first. When all the numbers are sorted on a more significant digit, numbers that have the same digit in that position but different digits in a less-significant position are already sorted on the less-significant position.


Heap Sort
In this method, the file to be sorted is interpreted as a binary tree. An array, which is a sequential representation of a binary tree, is used to implement the heap sort.
The basic premise behind sorting an array is that its elements start out in some random order and need to be arranged from lowest to highest.
It is easy to see that the list
1, 5, 6, 19, 23, 45, 67, 98, 124, 401
is sorted, whereas the list
4, 1, 90, 34, 100, 45, 23, 82, 11, 0, 600, 345
is not. The property that makes the second one "not sorted" is that there are adjacent elements that are out of order. The first item is greater than the second instead of less, and likewise the third is greater than the fourth and so on. Once this observation is made, it is not very hard to devise a sort that proceeds by examining adjacent elements to see if they are in order, and swapping them if they are not.
 




What is Sorting - peak concepts

Sorting refers to ordering data in an increasing or decreasing fashion according to some linear relationship among the data items.

Sorting can be done on names, numbers and records. Sorting reduces the time needed to search. For example, it is relatively easy to look up the phone number of a friend in a telephone directory because the names in the phone book have been sorted into alphabetical order.
 
This example clearly illustrates one of the main reasons that sorting large quantities of information is desirable. That is, sorting greatly improves the efficiency of searching. If we were to open a phone book, and find that the names were not presented in any logical order, it would take an incredibly long time to look up someone’s phone number. 


What is Circular Linked Lists - peak concepts

In a circularly-linked list, the first and final nodes are linked together. In other words, circularly-linked lists can be seen as having no beginning or end. To traverse a circular linked list, begin at any node and follow the list in either direction until you return to the original node. 

This type of list is most useful in cases where you have one object in a list and wish to see all other objects in the list. The pointer pointing to the whole list is usually called the end pointer.



What is Doubly Linked List - peak concepts

A more sophisticated kind of linked list is a doubly-linked list or a two-way linked list. In a doubly linked list, each node has two links: one pointing to the previous node and one pointing to the next node.

What is Singly Circular Linked List - peak concepts

The advantage of using a circular linked list is that the final null pointer is replaced: the pointer field of the last node points to the first node, and this circular arrangement makes traversal much easier. 

Insertion and deletion at the first and middle positions are the same as for a singly linked list; only the last node is handled differently.

Insertion


  • Insertion in the last node
 
To insert a node at the last position, insert the new node after the current last node, and then change the pointer field of the new node to point to the first node. Let the last node be 'last', the new node to be inserted be 'new', and the first node in the list be 'first'. The pointers used are 'data' for the data field and 'next' for the pointer field; the data to be inserted is 'X'. Then the insertion is
 
                                     last -> next = new
                                     new -> next = first 

 Deletion
 
  • Deletion in the last node
 
To delete a node at the last position, change the pointer field of the node before the current last node so that it points to the first node. Let the last node be 'last', the node before the current last node be 'prev', and the first node in the list be 'first'. The pointers used are 'data' for the data field and 'next' for the pointer field. Then the deletion is
 
                                      prev -> next = first 
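Putting the two operations above together, here is a small C# sketch of a singly circular linked list (the field names follow the description above; 'new' is renamed newNode because new is a C# keyword, and the int payload is an arbitrary choice):

    class Node
    {
        public int data;
        public Node next;
    }

    class CircularList
    {
        private Node first;
        private Node last;   // the 'end pointer' to the whole list

        // Insertion at the last position
        public void InsertLast(int x)
        {
            Node newNode = new Node { data = x };
            if (first == null)
            {
                first = last = newNode;
                newNode.next = newNode;      // a single node points to itself
            }
            else
            {
                last.next = newNode;         // last -> next = new
                newNode.next = first;        // new  -> next = first
                last = newNode;
            }
        }

        // Deletion of the last node
        public void DeleteLast()
        {
            if (first == null) return;                        // nothing to delete
            if (first == last) { first = last = null; return; }

            Node prev = first;
            while (prev.next != last) prev = prev.next;       // find the node before last
            prev.next = first;                                // prev -> next = first
            last = prev;
        }
    }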



What is the purpose of realloc( ) - peak concepts

The function realloc(ptr, n) takes two arguments. The first argument, ptr, is a pointer to a block of memory whose size is to be altered. The second argument, n, specifies the new size. The size may be increased or decreased. 

If n is greater than the old size and if sufficient space is not available subsequent to the old region, the function realloc( ) may create a new region and all the old data are moved to the new region.

How can we analyse an Algorithm - peak concepts

Analysis of Algorithms (AofA) is a field in computer science whose overall goal is an understanding of the complexity of algorithms. While an extremely large amount of research is devoted to worst-case evaluations, the focus in these pages is methods for average-case and probabilistic analysis. Properties of random strings, permutations, trees, and graphs are thus essential ingredients in the analysis of algorithms.

To analyze an algorithm is to determine the amount of resources (such as time and storage) necessary to execute it. Most algorithms are designed to work with inputs of arbitrary length. Usually the efficiency or complexity of an algorithm is stated as a function relating the input length to the number of steps (time complexity) or storage locations (space complexity).

Algorithm analysis is an important part of a broader computational complexity theory, which provides theoretical estimates for the resources needed by any algorithm which solves a given computational problem. These estimates provide an insight into reasonable directions of search of efficient algorithms.

  

What is the difference between realloc() and free() - peak concepts

The free subroutine frees a block of memory previously allocated by the malloc subroutine. Undefined results occur if the Pointer parameter is not a valid pointer. If the Pointer parameter is a null value, no action will occur. 

The realloc subroutine changes the size of the block of memory pointed to by the Pointer parameter to the number of bytes specified by the Size parameter and returns a new pointer to the block.
 
The pointer specified by the Pointer parameter must have been created with the malloc, calloc, or realloc subroutines and not been deallocated with the free or realloc subroutines. Undefined results occur if the Pointer parameter is not a valid pointer.


What are the various steps to plan Algorithm

(1) Devise Algorithm : Creating an algorithm is an art which may never be fully automated. When we get the problem, we should first analyse it clearly and then write down some steps on paper.
  
(2) Validate Algorithm : Once an algorithm is devised , it is necessary to show that it computes the correct answer for all possible legal inputs . This process is known as algorithm validation. The algorithm need not as yet be expressed as a program. It is sufficient to state it in any precise way. The purpose of validation is to assure us that this algorithm will work correctly independently of the issues concerning the programming language it will eventually be written in. Once the validity of the method has been shown, a program can be written and a second phase begins. This phase is referred to as program proving or program verification. 

(3) Analyse Algorithm : As an algorithm is executed , it uses the computers central processing unit to perform operations and its memory ( both immediate and auxiliary) to hold the program and data. Analysis of algorithm or performance analysis refers to the task of determining how much computing time and storage an algorithm requires. An important result of this study is that it allows you to make quantitative judgments about the value of one algorithm over another. Another result is that it allows you to predict whether the software will meet any efficiency constraints that exist. Analysis can be made by taking into consideration.
  
(4) Test A Program : Testing a program consists of 2 phases : debugging and performance measurement. Debugging is the process of executing programs on sample data sets to determine whether the results are incorrect and, if so, to correct them. Performance measurement is the process of executing a correct program on data sets and measuring the time and space it takes to compute the results. These timing figures are useful in that they may confirm a previously done analysis and point out logical places to perform useful optimization.
 


What is heap - peak concepts

The heap is where malloc(), calloc(), and realloc() get memory. Getting memory from the heap is much slower than getting it from the stack. On the other hand, the heap is much more flexible than the stack. Memory can be allocated at any time and deallocated in any order. Such memory isn't deallocated automatically; you have to call free().

Recursive data structures are almost always implemented with memory from the heap. Strings often come from there too, especially strings that could be very long at runtime. If you can keep data in a local variable (and allocate it from the stack), your code will run faster than if you put the data on the heap. Sometimes you can use a better algorithm if you use the heap - one that is faster, or more robust, or more flexible. It's a tradeoff.

If memory is allocated from the heap, it's available until the program ends. That's great if you remember to deallocate it when you're done. If you forget, it's a problem. A "memory leak" is some allocated memory that's no longer needed but isn't deallocated. If you have a memory leak inside a loop, you can use up all the memory on the heap and not be able to get any more. (When that happens, the allocation functions return a null pointer.) In some environments, if a program doesn't deallocate everything it allocated, memory stays unavailable even after the program ends.
  

List out the areas in which data structures are applied extensively - peak concepts

  1. Compiler Design
  2. Operating System
  3. Database Management System
  4. Statistical analysis package
  5. Numerical Analysis
  6. Graphics
  7. Artificial Intelligence
  8. Simulation

What is data structure - peak concepts

A data structure is a way of organizing data that considers not only the items stored, but also their relationship to each other. Advance knowledge about the relationship between data items allows designing of efficient algorithms for the manipulation of data.



Monday 1 July 2013

What is bottom-up implementation - peak concepts



In a bottom-up implementation, the process is the reverse. The development starts with implementing the modules at the bottom of the hierarchy and proceeds through the higher levels until it reaches the top.  


What is top down implementation - peak concepts



In a top-down implementation, the implementation starts from the top of the hierarchy and proceeds to the lower levels. First the main module is implemented, then its subordinates are implemented, and their subordinates, and so on. 

What is Spiral model - peak concepts


This model is organized like a spiral that has many cycles. Each cycle in the spiral begins with the identification of objectives for that cycle, the different alternatives that are possible for achieving those objectives, and the constraints that exist. 

The spiral model is a risk-driven approach to software development that encompasses the best features of both the classic life cycle and prototyping. In quadrant A, different levels of planning are performed.

In quadrant B, a thorough risk analysis is done and an appropriate prototype is initiated. In quadrant C, the various software development products are sequentially completed. In quadrant D, the client and management evaluate these products and give permission to continue to the next level of the spiral. The spiral goes from A to B, from B to C, from C to D and from D back to A until the complete system is developed and accepted.


What is Cohesion - peak concepts


Cohesion: Cohesion is the concept that tries to capture these intra-module relationships. With cohesion we are interested in determining how closely the elements of a module are related to each other. Cohesion of a module represents how tightly bound the internal elements of the module are to one another. Cohesion of a module gives the designer an idea about whether the different elements of a module belong together in the same module. Cohesion and coupling are clearly related: usually, the greater the cohesion of each module in the system, the lower the coupling between modules. There are several levels of Cohesion:
 
-              Coincidental
-              Logical
-              Temporal
-              Procedural
-              Communicational
-              Sequential
-              Functional
 
Coincidental is the lowest level, and functional is the highest. Coincidental Cohesion occurs when there is no meaningful relationship among the elements of a module. Coincidental Cohesion can occur if an existing program is modularized by chopping it into pieces and making different pieces modules.  


What is Coupling - peak concepts


Coupling: Two modules are considered independent if one can function completely without the presence of other. Obviously, if two modules are independent, they are solvable and modifiable separately. However, all the modules in a system cannot be independent of each other, as they must interact so that together they produce the desired external behavior of the system. 

The more connections between modules, the more dependent they are in the sense that more knowledge about one module is required to understand or solve the other module. Hence, the fewer and simpler the connections between modules, the easier it is to understand one without understanding the other. Coupling between modules is the strength of interconnection between modules or a measure of independence among modules.
 
To solve and modify a module separately, we would like the module to be loosely coupled with other modules. The choice of modules decides the coupling between modules. Coupling is an abstract concept and is not easily quantifiable. So, no formulas can be given to determine the coupling between two modules. However, some major factors can be identified as influencing coupling between modules.
 
Among them the most important are the type of connection between modules, the complexity of the interface, and the type of information flow between modules. Coupling increases with the complexity and obscurity of the interface between modules. To keep coupling low we would like to minimize the number of interfaces per module and the complexity of each interface. An interface of a module is used to pass information to and from other modules. Complexity of the interface is another factor affecting coupling.
 
The more complex each interface is, the higher the degree of coupling. The type of information flow along the interfaces is the third major factor affecting coupling. There are two kinds of information that can flow along an interface: data or control. Passing or receiving control information means that the action of the module will depend on this control information, which makes it more difficult to understand the module and provide its abstraction. Transfer of data information means that a module passes some data as input to another module and gets some data in return as output.

 

Characteristics of an SRS - peak concepts


1.       Correct
2.       Complete
3.       Unambiguous
4.       Verifiable
5.       Consistent
6.       Ranked for importance and/or stability
7.       Modifiable
8.       Traceable 


Advantages of SRS - peak concepts

A software SRS establishes the basis for agreement between the client and the supplier on what the software product will do.

1.    A SRS provides a reference for validation of the final product.
2.    A high-quality SRS is a prerequisite to high-quality software.
3.    A high-quality SRS reduces the development cost.
  

What is SRS - peak concepts


A software requirement specification (SRS) is a document that completely describes what the proposed software should do, without describing how the software will do it. The basic goal of the requirement phase is to produce the SRS, which describes the complete behavior of the proposed software. The SRS also helps clients understand their own needs.

What is Error Correction and Detection - peak concepts

Error detection and correction has great practical importance in maintaining data (information) integrity across noisy communication channels and less-than-reliable storage media.
Error Correction : Send additional information so incorrect data can be corrected and accepted. Error correction is the additional ability to reconstruct the original, error-free data.
 
There are two basic ways to design the channel code and protocol for an error correcting system :
Automatic Repeat-Request (ARQ) : The transmitter sends the data and also an error detection code, which the receiver uses to check for errors, and request retransmission of erroneous data. In many cases, the request is implicit; the receiver sends an acknowledgement (ACK) of correctly received data, and the transmitter re-sends anything not acknowledged within a reasonable period of time.
 
Forward Error Correction (FEC) : The transmitter encodes the data with an error-correcting code (ECC) and sends the coded message. The receiver never sends any messages back to the transmitter. The receiver decodes what it receives into the "most likely" data. The codes are designed so that it would take an "unreasonable" amount of noise to trick the receiver into misinterpreting the data.
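As a toy illustration of the "error detection code" idea in the ARQ scheme (this is a deliberately simple additive checksum invented for the example, not the code used by any real protocol), the sender transmits a checksum alongside the data and the receiver recomputes it to decide whether to acknowledge or request retransmission:

    class ChecksumDemo
    {
        // A very simple additive checksum over the data bytes
        static byte Checksum(byte[] data)
        {
            int sum = 0;
            foreach (byte b in data) sum += b;
            return (byte)(sum & 0xFF);
        }

        static void Main()
        {
            byte[] message = { 10, 20, 30 };
            byte sent = Checksum(message);

            // ... message and 'sent' travel over a noisy channel ...
            message[1] = 21;   // simulate a single corrupted byte

            bool ok = Checksum(message) == sent;
            System.Console.WriteLine(ok ? "ACK" : "request retransmission");
        }
    }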