
Fundamental terminology in coding (programming) part 1




Before we start learning to code for real, we must know some basic terminology in coding (programming).


Coding does not lend itself to someone simply showing you to "do this" or "do that." Most of the terms that come with it are new terminology: words you have never seen before, or words you have seen that now carry different meanings.

Irrespective of the programming language you choose to learn, the basic concepts of programming are similar across languages. Some of these concepts include:

1. Variable declaration

Variables are containers for storing data values: named memory locations that hold a value of a given data type.
Variables are created using a declaration or keyword that varies across languages.
Variable names are usually alphanumeric, that is, they contain a-z and 0-9.
They can also include special characters such as the underscore or the dollar sign.
Variables can hold values of any data type supported by the programming language.
This value may change during program execution.

Example:
int x = 12; here int is the data type (it tells the computer the variable holds an integer value) and x is the variable that stores the value 12 in the computer's memory.
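As a minimal sketch in Java (the variable names here are just illustrative, not part of the original example), declaring a few variables and updating one of them looks like this:

public class VariablesDemo {
    public static void main(String[] args) {
        int x = 12;            // integer variable named x holding the value 12
        double price = 9.99;   // floating-point variable
        String name = "Ada";   // text (String) variable
        boolean done = false;  // true/false variable

        x = x + 5;             // the stored value can change during execution
        System.out.println(x); // prints 17
        System.out.println(name + " " + price + " " + done);
    }
}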

2. Data Types

When coding across programming languages, there are many common data types that software developers can use. 

These data types can determine how much memory a computer needs to process the code, how long it might take to load certain features, and what functions a program might perform.

What is a data type?

A data type is an attribute of a piece of data that tells a device how the end-user might interact with the data. 

You can also think of them as categorizations that different coding programs might combine in order to execute certain functions. 

Most programming languages, including C++ and Java, use the same basic data types.
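For instance, a short Java sketch of the common basic data types (the variable names below are purely illustrative):

public class DataTypesDemo {
    public static void main(String[] args) {
        int count = 42;                   // whole numbers
        long bigCount = 10_000_000_000L;  // larger whole numbers
        double ratio = 3.14;              // floating-point (decimal) numbers
        char grade = 'A';                 // a single character
        boolean isValid = true;           // true or false
        String label = "hello";           // text, built from characters

        System.out.println(count + " " + ratio + " " + grade + " " + isValid + " " + label);
    }
}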




3. Algorithm

In computer programming terms, an algorithm is a set of well-defined instructions to solve a particular problem. It takes a set of input(s) and produces the desired output.

Real-life examples that define the use of algorithms:

Driverless cars are considered to be the future of human transport. They rely on object classification algorithms that help cars to detect objects, interpret situations and make decisions. 

Algorithms can do this because they have spent hundreds of thousands of hours learning by navigating roads.


An algorithm to add two numbers (a code sketch follows the list):

  1. Take two number inputs

  2. Add the numbers using the + operator

  3. Display the result
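A minimal Java sketch of this algorithm, reading the two numbers from standard input (the class name is just illustrative):

import java.util.Scanner;

public class AddTwoNumbers {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);

        int first = in.nextInt();   // step 1: take two number inputs
        int second = in.nextInt();

        int sum = first + second;   // step 2: add the numbers using the + operator

        System.out.println("Sum = " + sum); // step 3: display the result
        in.close();
    }
}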

Complexity of Algorithm

The term algorithm complexity measures how many steps are required by the algorithm to solve the given problem. It evaluates the order of count of operations executed by an algorithm as a function of input data size.
An algorithm is analyzed using Time Complexity and Space Complexity. Writing an efficient algorithm helps to consume the minimum amount of time for processing the logic.
  • Time Complexity: The time taken by the algorithm to solve the problem. It is measured by counting the iterations of loops, the number of comparisons, and so on.
  • Space Complexity: The space taken by the algorithm to solve the problem. It includes the space used by the necessary input variables plus any extra space (excluding the space taken by the inputs) that the algorithm uses.
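As a rough illustration (my own example, not from the article), summing an array touches each element once, so its running time grows linearly with the input size while its extra space stays constant:

public class ComplexityDemo {
    // Time complexity: O(n), because the loop runs once per element.
    // Space complexity: O(1) extra, since only the variable sum is used
    // beyond the input array itself.
    static int sumOfArray(int[] numbers) {
        int sum = 0;
        for (int value : numbers) {
            sum += value;
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumOfArray(new int[] {1, 2, 3, 4})); // prints 10
    }
}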

Analysis of Algorithms

Worst case

In the worst-case analysis, we calculate the upper bound on the running time of an algorithm. We must know the case that causes the maximum number of operations to be executed.

Average case

In the average-case analysis, we take all possible inputs, calculate the computing time for each input, sum all the calculated values, and divide the sum by the total number of inputs.
We must know (or predict) the distribution of cases.


Best Case 

In the best-case analysis, we calculate the lower bound on the running time of an algorithm. We must know the case that causes the minimum number of operations to be executed.
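As one concrete illustration (my own example, not from the article), consider a linear search: the best case is when the target is the first element, the worst case is when it is the last element or absent, and the average case falls in between:

public class LinearSearch {
    // Returns the index of target in data, or -1 if it is not present.
    // Best case:  target is data[0]          -> 1 comparison  (lower bound)
    // Worst case: target is last or missing  -> n comparisons (upper bound)
    // Average case (target equally likely anywhere): about n/2 comparisons
    static int indexOf(int[] data, int target) {
        for (int i = 0; i < data.length; i++) {
            if (data[i] == target) {
                return i;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] data = {7, 3, 9, 1};
        System.out.println(indexOf(data, 7)); // best case: found at index 0
        System.out.println(indexOf(data, 5)); // worst case: not found, prints -1
    }
}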

Asymptotic notation:

The word Asymptotic means approaching a value or curve arbitrarily closely (i.e., as some sort of limit is taken).

Asymptotic Notations:

Asymptotic notation is a way of comparing functions that ignores constant factors and small input sizes. Three notations are used to express the running time complexity of an algorithm:

The three types of asymptotic notation are:



1. Big-O notation (O): Big-O is the formal method of expressing the upper bound of an algorithm's running time. It measures the longest amount of time the algorithm can take.

2. Omega notation (Ω): Omega notation represents the lower bound of the running time of an algorithm.

3. Theta notation (Θ): Theta notation represents both the upper and the lower bound of the running time of an algorithm.
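To make the notation concrete (again my own sketch, not part of the original post), compare a single loop with a nested loop; Big-O ignores the constant work inside each loop and keeps only the growth rate:

public class GrowthDemo {
    // Runs n iterations: time complexity O(n).
    static long countSingleLoop(int n) {
        long operations = 0;
        for (int i = 0; i < n; i++) {
            operations++;
        }
        return operations;
    }

    // Runs n * n iterations: time complexity O(n^2).
    static long countNestedLoops(int n) {
        long operations = 0;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                operations++;
            }
        }
        return operations;
    }

    public static void main(String[] args) {
        System.out.println(countSingleLoop(1000));  // 1000
        System.out.println(countNestedLoops(1000)); // 1000000
    }
}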


4. Flowchart


A flowchart is a graphical representation of an algorithm. Programmers often use it as a program-planning tool to solve a problem.
It makes use of symbols that are connected to one another to indicate the flow of information and processing.

The flowchart is the most widely used graphical representation of algorithms and procedural design workflows.


We will look at flowcharts in detail, along with other fundamental terminology in coding (programming), in part 2 (coming soon).


Programming today is a race between software engineers striving to build bigger and better idiot-proof programs and the Universe trying to produce bigger and better idiots. So far, the Universe is winning.
