Tech Kaizen

passion + usefulness = success .. change is the only constant in life


Effective Code Reviews ...

The purpose of a code review is to ensure a high level of code quality. A code/peer review is where developers go over the code in a system to:



1. Make sure that the code is written to standard and satisfies the specifications, requirements or design documents



2. Suggest improvement opportunities to the author



3. Learn different ways of coding and about the system under review (competence building)



Risk = probability of bug x probability of bug activation x impact of bug activation



What to Review



To get some idea which code to review, think about the following:



1. code that uses new technology, techniques, or tools

2. key architectural components

3. complex logic or algorithms

4. security related code

5. code that has many exception conditions or failure modes

6. exception handling code that cannot easily be tested

7. components that are intended to be reused

8. code that will serve as models or templates for other code

9. code that affects multiple portions of the product

10. complex user interfaces

11. code created by less experienced developers

12. code having high cyclomatic complexity

13. code having a history of many defects or changes



When starting to do code reviews for the first time, begin with core components, base classes, and complex areas. As the code reviews progress, you can gain more coverage by choosing a use case to follow through the system or by selecting a layer. Over time, review code as it is made ready, and also pick up any areas that keep having bugs reported against them. However, at no point should there be a target to review 100% of the system's code.



1. While doing Code Reviews:

  • Ask questions rather than make statements: A statement is accusatory. "You didn't follow the standard here" is an attack—whether intentional or not. The question, "What was the reasoning behind the approach you used?" seeks more information. Obviously, that question can't be asked in a sarcastic or condescending tone; but, done correctly, it can often open the developer up to stating their thinking and then asking whether there was a better way.
  • Avoid the "Why" questions: Although extremely difficult at times, avoiding the "Why" questions can substantially improve the mood. Just as a statement is accusatory—so is a why question. Most "Why" questions can be reworded to a question that doesn't include the word "Why" and the results can be dramatic. For example, "Why didn't you follow the standards here..." versus "What was the reasoning behind the deviation from the standards here..."
  • Remember to praise: Code reviews naturally focus on telling developers how they can improve, not on telling them that they did a good job. Human nature, however, is such that we want and need to be acknowledged for our successes, not just shown our faults. Because development is necessarily creative work that developers pour their soul into, it often can be close to their hearts. This makes the need for praise even more critical.
  • Make sure you have good coding standards to reference: Code reviews find their foundation in the coding standards of the organization. Coding standards are supposed to be the shared agreement that the developers have with one another to produce quality, maintainable code. If you're discussing an item that isn't in your coding standards, you have some work to do to get the item in the coding standards. You should regularly ask yourself whether the item being discussed is in your coding standards.
  • Make sure the discussion stays focused on the code and not the coder: Staying focused on the code helps keep the process from becoming personal. You're not interested in saying the person is a bad person. Instead, you're looking to generate the best quality code possible.
  • Remember that there is often more than one way to approach a solution: Although the developer might have coded something differently from how you would have, it isn't necessarily wrong. The goal is quality, maintainable code. If it meets those goals and follows the coding standards, that's all you can ask for.

What to Do If You're a Developer

The above advice is fine if you're the project or development leader who is organizing the code review, but what if you're the one who has to endure a painful code review? What can you do to make the process less painful if you're the developer who's having your code reviewed?

1. Remember that the code isn't you. Development is a creative process: It's normal to get attached to your code. However, the folks who are reviewing the code generally aren't trying to say that you're a bad developer (or person) by pointing out something that you missed, or a better way of handling things. They're doing what they're supposed to be doing by pointing out better ways. Even if they're doing a bad job of conveying it, it's your responsibility to hear past the attacking comments and focus on the learning that you can get out of the process. You need to strive to not get defensive.

2. Create a checklist for yourself of the things that the code reviews tend to focus on: Some of this checklist should be easy to put together. It should follow the outline of the coding standards document. Because it's your checklist, you can focus on the things that you struggle with and skip the things that you rarely, if ever, have a problem with. Run through your code with the checklist and fix whatever you find. Not only will you reduce the number of things that the team finds, you'll reduce the time to complete the code review meeting—and everyone will be happy to spend less time in the review.

3. Help to maintain the coding standards: Offer to add to the coding standards for things discussed that aren't in the coding standards. One of the challenges that a developer has in an organization with combative code review practices is that they frequently don't know where the next problem will come from. If you document each issue into the coding standards, you can check for it with your checklist the next time you come up for code reviews. It also will help cement the concept into your mind so that you're less likely to miss opportunities to use the feedback.

Links:

The information above is gathered from the Links below:

Effective Code Reviews Without the Pain - http://www.developer.com/mgmt/article.php/3579756

General Code Review Guidelines - http://openmrs.org/wiki/Code_Review_Checklist

General Code Review Guidelines - http://ncmi.bcm.tmc.edu/homes/lpeng/psp/code/checklist.html

Macadamian's Code Review Checklist - http://www.macadamian.com/index.php?option=com_content&task=view&id=27&Itemid=31

C# code review checklist - http://weblogs.asp.net/tgraham/archive/2003/12/19/44763.aspx

SQL Server Code Review Checklist - http://www.mssqltips.com/tip.asp?tip=1303

HPROF to tune Java Application Performance - http://java.sun.com/developer/TechTips/2000/tt0124.html

2. C++ Code Review CheckList

Classes

1. Does the class have any virtual functions? If so, is the destructor non-virtual?

Classes having virtual functions should always have a virtual destructor. This is necessary since it is likely that you will hold an object of a class with a pointer of a less-derived type. Making the destructor virtual ensures that the right code will be run if you delete the object via the pointer.



2. Does the class have any of the following:

Copy constructor
Assignment operator
Destructor

If so, it generally will need all three. (Exceptions may occasionally be found for some classes having a destructor with neither of the other two.)



Deallocating Data

1.Are arrays being deleted as if they were scalars?

delete myCharArray;

should be

delete [] myCharArray;

2. Does the deleted storage still have pointers to it?

It is recommended that pointers are set to NULL following deletion, or to another safe value meaning "uninitialized." This is neither necessary nor recommended within destructors, since the pointer variable itself will cease to exist upon exiting.



3. Are you deleting already-deleted storage?

This is not possible if the previous rule is followed (pointers are reset to NULL after deletion). The C++ standard specifies that it is always safe to delete a NULL pointer, so it is not necessary to check for that value.

If C standard library allocators are used in a C++ program (not recommended):



4. Is delete invoked on a pointer obtained via malloc, calloc, or realloc?

5. Is free invoked on a pointer obtained via new?

Both of these practices are dangerous. Program behavior is undefined if you mix the C and C++ allocation and deallocation functions this way.



Constants

1. Does the value of the variable never change?


int months_in_year = 12;

should be

const unsigned months_in_year = 12;



2. Are constants declared with the preprocessor #define mechanism?

#define MAX_FILES 20

should be

const unsigned MAX_FILES = 20;



3. Is the usage of the constant limited to only a few (or perhaps only one) class?

If so, is the constant global?

const unsigned MAX_FOOS = 1000;

const unsigned MAX_FOO_BUFFERS = 40;



should be



class foo {
public:
    enum { MAX_FOOS = 1000 };
    ...
private:
    enum { MAX_FOO_BUFFERS = 40 };
    ...
};



If the size of the constant exceeds int, another mechanism is available:

class bar {
public:
    static const long MAX_INSTS;
    ...
};

const long bar::MAX_INSTS = 70000L;



The keyword static ensures there is only one instance of the variable for the entire class. Static data members generally cannot be initialized within the class declaration (const integral members are the exception), so the initialization line must be included in the implementation file for class bar. Static constant members initialized this way have one drawback: you cannot use them to declare member data arrays of a certain size, because the value is not available to the compiler at the point at which the array is declared in the class.

Links

"Code Review Checklist" by Charles Vaz - http://charlesconradvaz.wordpress.com/2006/02/16/code-review-checklist-2/

Code Inspection Check List - http://www.chris-lott.org/resources/cstyle/Baldwin-inspect.pdf

C Code Review Guide - http://casper.ict.hen.nl/se/SEscripts/CodeReviewGuide.html

Best Practices: Code Reviews - http://msdn.microsoft.com/en-us/library/bb871031.aspx



3. JAVA Code Review CheckList

Error Handling

1. Does the code comply with the accepted Exception Handling Conventions?

a. We need to expand our notion of Exception Handling Conventions.

b. Some method in the call stack needs to handle the exception, so that we don’t display that exception stacktrace to the end user.

2. Does the code make use of exception handling?

a. Exception handling should be consistent throughout the system.

3. Does the code simply catch exceptions and log them?

a. Code should handle exceptions, not just log them.

4. Does the code catch general exceptions (java.lang.Exception)?

a. Catching general exceptions is commonly regarded as “bad practice”.

5. Does the code correctly impose conditions for “expected” values?

a. For instance, if a method returns null, does the code check for null? The following code should check for null (a null-safe version is sketched after this list):

Person person = Context.getPersonService().getPerson(personId);

person.getAddress().getStreet();

What should be our policy for detecting null references?

6. Does the code test all error conditions of a method call?

a. Make sure all possible values are tested.

b. Make sure the JUnit test covers all possible values.
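A minimal, hedged sketch of items 3 to 5 above, using hypothetical Person/PersonService stand-ins (they are illustrative, not the real service from the snippet): check "expected" null values before dereferencing, catch a specific exception type rather than java.lang.Exception, and handle the failure instead of only logging it.

import java.util.HashMap;
import java.util.Map;

public class NullAndExceptionSketch {

    static class Person {
        private final String street;
        Person(String street) { this.street = street; }
        String getStreet() { return street; }
    }

    // Hypothetical checked exception for a failed lookup.
    static class PersonLookupException extends Exception {
        PersonLookupException(String message) { super(message); }
    }

    // Hypothetical data source; getPerson() may return null for an unknown id.
    static class PersonService {
        private final Map<Integer, Person> people = new HashMap<Integer, Person>();
        PersonService() { people.put(1, new Person("Main Street")); }
        Person getPerson(Integer id) throws PersonLookupException {
            if (id == null || id < 0) {
                throw new PersonLookupException("Invalid person id: " + id);
            }
            return people.get(id);
        }
    }

    static String findStreet(PersonService service, Integer personId) {
        try {
            Person person = service.getPerson(personId);
            if (person == null) {                  // item 5: impose the "expected value" condition
                return "unknown";                  // handle the case rather than dereferencing null
            }
            return person.getStreet();
        } catch (PersonLookupException e) {        // item 4: catch the specific type, not Exception
            // item 3: handle the condition (translate it for the caller) instead of only logging it
            throw new IllegalArgumentException("Could not load person " + personId, e);
        }
    }

    public static void main(String[] args) {
        PersonService service = new PersonService();
        System.out.println(findStreet(service, 1));    // Main Street
        System.out.println(findStreet(service, 42));   // unknown
    }
}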

Security

1. Does the code appear to pose a security concern?

a. Passwords should not be stored in the code. In fact, we have adopted a policy in which we store passwords in runtime properties files.

b.Connect to other systems securely – i.e. use HTTPS instead of HTTP where possible.

Thread Safeness

1. Does the code practice thread safeness?

a. If objects can be accessed by multiple threads at one time, code altering global variables (static variables) should be enclosed using a synchronization mechanism (synchronized); a small sketch follows this list.

b. In general, controllers / servlets should not use static variables.

c. Use synchronization on the smallest unit of code possible. Using synchronization can cause a huge performance penalty, so you should limit its scope by synchronizing only the code that needs to be thread safe.

d. Write access to a static variable should be synchronized; plain (unsynchronized) read access is only safe when visibility is otherwise guaranteed, for example by declaring the variable volatile (see item f).

e. Even if servlets/controllers are thread-safe, multiple threads can access HttpSession attributes at the same time, so be careful when writing to the session.

f. Use the volatile keyword to warn the compiler that threads may change an instance or class variable; it tells the compiler not to cache the value in a register.

g. Release locks in the order they were obtained to avoid deadlock scenarios.

2. Does the code avoid deadlocks?

a. I’m not entirely sure how to detect a deadlock, but we need to make sure we acquire/release locks in a manner that does not cause contention between threads. For instance, if Thread A acquires Lock #1, then Lock #2, then Thread B should not acquire Lock #2, then Lock #1.

b. Avoid calling synchronized methods within synchronized methods.
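A minimal sketch of items 1a, 1c, 1d and 1f above (class and field names are illustrative): a shared static counter guarded by a small synchronized block, and a volatile flag used only for visibility.

public class SharedCounterSketch {

    private static final Object LOCK = new Object();
    private static int requestCount = 0;                    // shared mutable static state (item 1a)
    private static volatile boolean shuttingDown = false;   // visibility-only flag (item 1f)

    public static void recordRequest() {
        if (shuttingDown) {
            return;                        // reading the volatile flag needs no lock
        }
        synchronized (LOCK) {              // item 1c: keep the synchronized block as small as possible
            requestCount++;
        }
    }

    public static int snapshot() {
        synchronized (LOCK) {              // item 1d: reads of the counter also take the lock
            return requestCount;           // so they see a consistent, up-to-date value
        }
    }

    public static void shutdown() {
        shuttingDown = true;
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable worker = () -> { for (int i = 0; i < 10000; i++) recordRequest(); };
        Thread t1 = new Thread(worker);
        Thread t2 = new Thread(worker);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(snapshot());    // 20000
    }
}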

Resource Leaks

1. Does the code release resources?

a. Close files, database connections, HTTP connections, etc.

2. Does the code release resources more than once?

a. This will sometimes cause an exception to be thrown.

3. Does the code use the most efficient class when dealing with certain resources?

a. For instance, buffered input / output classes.
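A minimal sketch of items 1 and 3 above: the resource is released exactly once on both the success and failure paths, and a buffered wrapper is used for efficiency. Try-with-resources needs Java 7 or later; on older JVMs the equivalent is an explicit try/finally that calls close().

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class ResourceHandlingSketch {

    public static long countLines(String path) throws IOException {
        long lines = 0;
        // BufferedReader wraps FileReader for efficient, buffered reads (item 3).
        try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
            while (reader.readLine() != null) {
                lines++;
            }
        }   // reader is closed here exactly once, even if an exception was thrown (items 1 and 2)
        return lines;
    }

    public static void main(String[] args) throws IOException {
        if (args.length == 1) {
            System.out.println(countLines(args[0]) + " lines");
        } else {
            System.out.println("usage: java ResourceHandlingSketch <file>");
        }
    }
}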

Miscellaneous:

1. Make sure that we are using StringBuffer (or StringBuilder) if we want to change the contents of a String; String objects themselves are immutable.



2. Always use “.equals” instead of “==” during object comparison.



3. Use wait()/notify() for inter-thread coordination instead of polling with sleep().
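A minimal sketch of items 1 and 2 above: build up text with a StringBuffer (StringBuilder is the unsynchronized alternative) instead of repeated String concatenation, and compare object contents with equals() rather than ==.

public class MiscChecklistSketch {
    public static void main(String[] args) {
        StringBuffer csv = new StringBuffer();          // item 1: mutable buffer, not String + String
        for (int i = 1; i <= 5; i++) {
            if (csv.length() > 0) {
                csv.append(',');
            }
            csv.append(i);
        }
        System.out.println(csv);                        // 1,2,3,4,5

        String a = "code review";
        String b = new String("code review");
        System.out.println(a == b);                     // false: compares object references
        System.out.println(a.equals(b));                // true: item 2, compares contents
    }
}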

Links:

Checklist: Java Code Review -
http://snap.uci.edu/viewXmlFile.jsp?resourceID=1529

http://www.javaworld.com/javaworld/javatips/jw-javatip88.html

http://undergraduate.csse.uwa.edu.au/units/CITS2220/assign2/JavaInspectionCheckList.pdf

http://www.cs.toronto.edu/~sme/CSC444F/handouts/java_checklist.pdf

http://www.deaded.com/staticpages/index.php/codereviewprocess



Labels: CODE REVIEWS

Getting Started with PORTING to Mac OS X ...

Mac OS X is a uniquely powerful development platform, supporting multiple development technologies including UNIX, Java, the proprietary Cocoa and Carbon runtime environments, and a host of open source, web, scripting, database, and development technologies.

Darwin is an open source POSIX-compliant computer operating system released by Apple Inc. in 2000. It is composed of code developed by Apple, as well as code derived from NeXTSTEP, BSD, and other free software projects. Darwin forms the core set of components upon which Mac OS X, Apple TV, and iOS are based. It is compatible with the Single UNIX Specification version 3 (SUSv3) and POSIX UNIX applications and utilities.

Carbon is one of Apple Inc.'s procedural application programming interfaces (APIs) for the Macintosh operating system. It provides C programming language access to Macintosh system services. Carbon provides a good degree of backward compatibility for programs originally written for the now-obsolete Mac OS 8 and 9; however, those systems are no longer actively supported, Apple having released the final OS 9 update in December 2001. The development of the Mac OS X APIs reflects that of the underlying operating system. Mac OS X is written mostly in C and Objective-C; in particular, Objective-C is ubiquitous in the human interface systems. With Mac OS X v10.5, after a transition in which new elements of the Carbon interface specifically referred to the underlying Cocoa system, Apple identified Objective-C and Cocoa as the preferred interface to human interface services. Carbon access to various human interface services is not available in the 64-bit operating environment, and significant new features will not be added to the 32-bit Carbon interface. Most other parts of the system, which have less emphasis on Objective-C, are not so affected.

Xcode is a suite of tools developed by Apple for developing software for Mac OS X and iOS, first released in 2003. At the time of writing, the latest stable release is version 4.2.1, which is available on the Mac App Store free of charge for Mac OS X Lion users (an Apple ID is required). Registered developers can download preview releases and previous versions of the suite through the Apple Developer website. The built-in Xcode tools, combined with time-tested stability and performance characteristics, standards-based technologies, and a remarkable user interface, make Mac OS X a multifaceted development platform. Mac OS X v10.4 Tiger brought developers new technologies like Spotlight, Dashboard, Automator, Core Data, Core Image, and many others; these powerful additions to the modern, UNIX-based foundation made Tiger one of the most advanced operating systems of its time.

Mac OS X is a form of Unix: its core, Darwin, derives from NeXTSTEP and BSD (including FreeBSD) code. There are differences from the Unixes you are familiar with, but there are many more similarities. It doesn't look like Unix when you turn on the machine, because the GUI is the most obvious thing, but underneath it is Unix -- which you can see by bringing up a terminal window and entering standard Unix shell commands.

When porting products from Unix, remember that Mac IS Unix. When porting from Windows that's different. What would you tell yourself if you were porting from Windows to Unix? They are very different.

Porting the basic code is not that difficult. It uses gcc/clang. But there are some things you need to be aware of.

  • You will, of course, have to become familiar with some different system APIs.
  • There are some different rules about building shared libraries. First, there are two kinds (bundles and dylibs), which are used in different circumstances. Second, although on other Unixes we don't worry about resolving all references when building a shared library (we let the executable take care of that at run time), you can't do that on OS X: all references have to be resolved at build time.
  • Because of the two types of processors (Intel and PowerPC), you need to become familiar with the Universal Binary mechanism. This essentially allows you to build the product for each processor on one machine, and then combine the results into a single "fat" binary that contains the code for both processors in a single file. The result is a product that can be installed and run on either machine (the system figures out which binary code to use), without the user having to use a different version for each processor.
  • If you intend to have any GUI, you really have to make it look Mac-like enough (which is a bit different from other graphical desktops) for Mac users to accept it. You have to understand the nature of the Mac desktop.
  • You also have to know how Mac products install. Underneath you can use your own script, but there is a Mac installer that people may expect you to use.

Credits: I credit the content of this article to my colleague Jonathan. Thanks a lot Buddy Jon !!!


Ref:

http://developer.apple.com/

http://developer.apple.com/referencelibrary/GettingStarted/GS_MacOSX/index.html

http://developer.apple.com/macosx/overview.html

Porting UNIX/Linux Applications to Mac OS X - http://developer.apple.com/documentation/Porting/Conceptual/PortingUnix/index.html#//apple_ref/doc/uid/TP30001003

Programming Mac OS X with Cocoa for Beginners - http://en.wikibooks.org/wiki/Programming_Mac_OS_X_with_Cocoa_for_Beginners

Mac Automation made simple - http://itunes.apple.com/podcast/mac-automation-made-simple/id288750552

Apple Script Tutorials - http://www.macosxautomation.com/applescript/firsttutorial/index.html

An Absolute Beginner's Guide to iPhone Development - http://www.switchonthecode.com/tutorials/an-absolute-beginners-guide-to-iphone-development

Labels: MAC OPERATING SYSTEM

XML Overview

1. XML Parsers (DOM, SAX)

Configuration files, application file formats, even database access layers make use of XML-based documents. Fortunately, several high-quality implementations of the standard APIs for handling XML are available. Unfortunately, these APIs are large and therefore provide a formidable hurdle for the beginner.

XML is becoming increasingly popular in the developer community as a tool for passing, manipulating, storing, and organizing information. If you are one of the many developers planning to use XML, you must carefully select and master the XML parser.

The parser—one of XML's core technologies—is your interface to an XML document, exposing its contents through a well-specified API. Confirm that the parser you select has the functionality and performance that your application requires. A poor choice can result in excessive hardware requirements, poor system performance, reduced developer productivity, and stability issues. There are two main types of XML API: DOM and SAX.

Which parser should I use - DOM or SAX?
The proper choice mostly depends on the requirements of the application. This note lists some of the properties of each parser and tries to help you decide which one to use.
The DOM parser always reads the whole XML document. It either throws an exception when it encounters an error during parsing, or returns a complete DOM tree as a representation of the XML document.


In contrast, the SAX parser works incrementally and generates events that are passed to the application. An application receives these events by implementing the SAX handler callbacks (in Java, typically by extending the DefaultHandler class).


What are the pros and cons?
The DOM parser offers a convenient way for reading, analyzing, manipulating and writing back XML files. Since it always reads the whole file before further processing can take place, using the DOM parser may lead to difficulties when processing huge XML files.

The SAX parser, on the other hand, does not generate a data representation of the XML content, so there is some more programming required, compared to the DOM parser. However, if demanded by the application, the SAX parser enables stream-processing and partial processing of XML sources, which both cannot be done by the DOM parser.

As a rule of thumb for deciding which parser to use, check the following:

1. Whenever you need stream-processing or partial processing of XML files, you need the SAX parser.

2. Whenever you need a complete representation of the XML content, you should prefer the DOM parser.

3. Still no decision? Then try the DOM parser first, since it is more convenient than the SAX parser.
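To make the trade-off concrete, here is a minimal sketch using the standard Java (JAXP) APIs, which follow the same DOM/SAX split described above; the tiny inline document is of course just an illustration.

import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.SAXParserFactory;
import org.w3c.dom.Document;
import org.xml.sax.Attributes;
import org.xml.sax.InputSource;
import org.xml.sax.helpers.DefaultHandler;

public class DomVsSaxSketch {
    private static final String XML =
        "<books><book title='Refactoring'/><book title='Design Patterns'/></books>";

    public static void main(String[] args) throws Exception {
        // DOM: the whole document is read into an in-memory tree before we touch it.
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(XML)));
        System.out.println("DOM book count: "
                + doc.getElementsByTagName("book").getLength());

        // SAX: the parser pushes events to our handler as it streams the input.
        DefaultHandler handler = new DefaultHandler() {
            @Override
            public void startElement(String uri, String localName,
                                     String qName, Attributes attributes) {
                if ("book".equals(qName)) {
                    System.out.println("SAX saw book: " + attributes.getValue("title"));
                }
            }
        };
        SAXParserFactory.newInstance().newSAXParser()
                .parse(new InputSource(new StringReader(XML)), handler);
    }
}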

XPath

XPath (XML Path Language) is a language for addressing parts of an XML document and selecting nodes from it. In addition, XPath may be used to compute values (strings, numbers, or Boolean values) from the content of an XML document.

XPath is a language for finding information in an XML document. XPath is used to navigate through elements and attributes in an XML document. XPath is a major element in the W3C's XSLT standard - and XQuery and XPointer are both built on XPath expressions. So an understanding of XPath is fundamental to a lot of advanced XML usage.

The XPath language is based on a tree representation of the XML document, and provides the ability to navigate around the tree, selecting nodes by a variety of criteria. In popular use (though not in the official specification), an XPath expression is often referred to simply as an XPath.
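A minimal sketch using the standard javax.xml.xpath API: one expression selects a node set, another computes a numeric value, matching the two uses of XPath described above. The small inline document is illustrative.

import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public class XPathSketch {
    public static void main(String[] args) throws Exception {
        String xml = "<library><book year='1994'>Design Patterns</book>"
                   + "<book year='1999'>Refactoring</book></library>";
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));

        XPath xpath = XPathFactory.newInstance().newXPath();

        // Select a node set: every book element published after 1995.
        NodeList recent = (NodeList) xpath.evaluate(
                "/library/book[@year > 1995]", doc, XPathConstants.NODESET);
        System.out.println("Recent books: " + recent.getLength());

        // Compute a value (a number) directly from the document content.
        Double count = (Double) xpath.evaluate(
                "count(/library/book)", doc, XPathConstants.NUMBER);
        System.out.println("Total books: " + count);
    }
}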

XQuery

The best way to explain XQuery is to say that XQuery is to XML what SQL is to database tables. XQuery was designed to query XML data. XML is a versatile markup language, capable of labeling the information content of diverse data sources including structured and semi-structured documents, relational databases, and object repositories.

A query language that uses the structure of XML intelligently can express queries across all these kinds of data, whether physically stored in XML or viewed as XML via middleware. This specification describes a query language called XQuery, which is designed to be broadly applicable across many types of XML data sources.

XQuery provides the means to extract and manipulate data from XML documents or any data source that can be viewed as XML, such as relational databases or office documents. XQuery uses XPath expression syntax to address specific parts of an XML document. It supplements this with a SQL-like "FLWOR expression" for performing joins. A FLWOR expression is constructed from the five clauses after which it is named: FOR, LET, WHERE, ORDER BY, RETURN.

XSLT

XSLT stands for XSL Transformations. XSLT is a language for transforming XML documents into other XML documents. XSLT is designed for use as part of XSL, which is a stylesheet language for XML. In addition to XSLT, XSL includes an XML vocabulary for specifying formatting. XSL specifies the styling of an XML document by using XSLT to describe how the document is transformed into another XML document that uses the formatting vocabulary.

XSLT is also designed to be used independently of XSL. However, XSLT is not intended as a completely general-purpose XML transformation language. Rather it is designed primarily for the kinds of transformations that are needed when XSLT is used as part of XSL.

Extensible Stylesheet Language Transformations (XSLT) is an XML-based language used for the transformation of XML documents into other XML or "human-readable" documents. The original document is not changed; rather, a new document is created based on the content of an existing one. The new document may be serialized (output) by the processor in standard XML syntax or in another format, such as HTML or plain text. XSLT is most often used to convert data between different XML Schemas or to convert XML data into HTML or XHTML documents for web pages, creating a dynamic web page, or into an intermediate XML format that can be converted to PDF documents.
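A minimal sketch of driving an XSLT transformation from Java with the standard javax.xml.transform API; the inline stylesheet, which rewrites each book element into an HTML list item, is only an illustration of "transforming one document into another".

import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class XsltSketch {
    public static void main(String[] args) throws Exception {
        String xml = "<library><book>Design Patterns</book><book>Refactoring</book></library>";
        String xslt =
            "<xsl:stylesheet version='1.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
          + "  <xsl:output method='html'/>"
          + "  <xsl:template match='/library'>"
          + "    <ul><xsl:apply-templates select='book'/></ul>"
          + "  </xsl:template>"
          + "  <xsl:template match='book'>"
          + "    <li><xsl:value-of select='.'/></li>"
          + "  </xsl:template>"
          + "</xsl:stylesheet>";

        // Compile the stylesheet, then apply it to the source document.
        Transformer transformer = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(xslt)));
        StringWriter html = new StringWriter();
        transformer.transform(new StreamSource(new StringReader(xml)),
                              new StreamResult(html));
        System.out.println(html);   // an HTML <ul> with one <li> per book
    }
}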

XSLT vs XQuery : XQuery is for query, XSLT is for transformation.

XSLT and XQuery have a lot of things in common ... let's focus on the differences.

Firstly, in functionality alone, there is no doubt that XSLT 2.0 wins over XQuery 1.0. There are many jobs that XSLT 2.0 can do easily that are really difficult in XQuery 1.0. Many of these fall into the categories of up-conversion applications or rendition applications, but there are plenty of others. A stylesheet or query that copies a document unchanged except for dropping certain attributes is one example that illustrates the point.

Secondly, it's probably true at present that XSLT is better at manipulating documents, and XQuery is better at manipulating data. Both languages should be able to do both jobs, but they seem to be better at some aspects of the job than others.

The extra verbosity of XSLT (which still applies although to a lesser extent with XSLT 2.0) is probably most noticeable with very simple queries ("count how many employees will retire this month"). I find myself increasingly using XQuery for such one-liners in preference to XSLT. This applies whether it's an ad-hoc throwaway query, or something built into a Java application. In many such cases, in fact, all one needs is an XPath expression, and of course XPath is a pure subset of XQuery.

If you are building XML databases, whether "native" XML databases or XML-over-relational databases, XQuery is certainly the language of choice. If you are transforming XML documents in filestore or in memory, I think it's much harder to justify preferring XQuery over XSLT at this stage of the game. In a year's time, perhaps there will be more data to justify making this choice especially for data-oriented applications, but my feeling is that anyone who does so today is probably attaching rather too much weight to subjective criteria.

I would actually encourage any serious XML developer to have both tools in their kitbag. Once you have learnt one, it's easy enough to learn the other. I think that with time, there will be a good level of interoperation between XSLT and XQuery products, so using one language for one task doesn't get in the way of using the other language for another part of the same application. XQuery clearly wins for the database access, XSLT for the presentation side of the application; there are other bits, such as the business logic, where in many cases either language will do the job and it becomes a matter of personal preference.

Links:

http://www.xml.com/

http://www.xml.com/pub/a/2001/02/14/perlsax.html

http://www.onjava.com/pub/a/onjava/2002/06/26/xml.html

http://www.devx.com/xml/Article/16922

http://www.idealliance.org/proceedings/xtech05/papers/02-03-01/

XML - http://www.w3schools.com/

XML Schema - http://www.w3schools.com/Schema/schema_schema.asp

XSD Facets (restricting the range of values for a data type) - http://www.w3schools.com/Schema/schema_facets.asp

2. XML Data Binding (C++, Java)

XML Data Binding provides a simple and direct way to use XML in your applications. With data binding your application can largely ignore the actual structure of XML documents, instead working directly with the data content of those documents. This isn't suitable for all applications, but it is ideal for the common case of applications that use XML for data exchange.

Data binding can also provide other benefits beyond programming simplicity. Since it abstracts away many of the document details, data binding usually requires less memory than a document model approach (such as DOM or JDOM) for working with documents in memory. You'll also find that the data binding approach gives you faster access to data within your program than you would get with a document model, since you don't need to go through the structure of the document to get at the data. Finally, special types of data such as numbers and dates can be converted to internal representations on input, rather than being left as text; this allows your application to work with the data values much more efficiently.

You might be wondering, if data binding is such great stuff, when would you want to use a document model approach instead? Basically there are two main cases:
1. When your application is really concerned with the details of the document structure. If you're writing an XML document editor, for instance, you'll want to stick to a document model rather than using data binding.

2. When the documents that you're processing don't necessarily follow fixed structures. For example, data binding wouldn't be a good approach for implementing a general XML document database.

Data binding dictionary
Grammar is a set of rules defining the structure of a family of XML documents. One type of grammar is the Document Type Definition (DTD) format defined by the XML specification. Another increasingly common type is the W3C XML Schema (Schema) format defined by the XML Schema specification. Grammars define which elements and attributes can be present in a document, and how elements can be nested within the document (often including the order and number of nested elements). Some types of grammars (such as Schema) also go much further, allowing specific data types and even regular expressions to be matched by character data content. In this article I'll often use the term description as an informal way to refer to the grammar for a family of documents.

Marshalling is the process of generating an XML representation for an object in memory. As with Java object serialization, the representation needs to include all dependent objects: objects referenced by our main object, objects referenced by those objects, and so on.

Unmarshalling is the reverse process of marshalling, building an object (and potentially a graph of linked objects) in memory from an XML representation.
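A minimal data binding sketch using JAXB (bundled with Java 6 through 10; on newer JDKs a JAXB implementation has to be added as a separate dependency). Marshalling turns the Book object into XML, and unmarshalling rebuilds an equivalent object from that XML; the Book class itself is a made-up example.

import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement
public class Book {
    public String title;   // public fields keep the sketch short; real code would use properties
    public int year;

    public static void main(String[] args) throws Exception {
        Book original = new Book();
        original.title = "Refactoring";
        original.year = 1999;

        JAXBContext context = JAXBContext.newInstance(Book.class);

        // Marshalling: object graph -> XML representation.
        StringWriter xml = new StringWriter();
        context.createMarshaller().marshal(original, xml);
        System.out.println(xml);

        // Unmarshalling: XML representation -> object graph.
        Book copy = (Book) context.createUnmarshaller()
                .unmarshal(new StringReader(xml.toString()));
        System.out.println(copy.title + " (" + copy.year + ")");
    }
}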

Links:

XML-Java Bindings

XML Data Binding - http://www-128.ibm.com/developerworks/library/x-databdopt/

XML Data Binding with Castor - http://www.onjava.com/pub/a/onjava/2001/10/24/xmldatabind.html

XML-Java Data Binding Using XMLBeans - http://www.onjava.com/pub/a/onjava/2004/07/28/XMLBeans.html

What tool for XML binding - http://www.theserverside.com/news/thread.tss?thread_id=30658

XML-C++ Bindings

XML Data binding with gSOAP - http://www.genivia.com/Products/gsoap/features.html

XML Data binding with LMX - http://tech-know-ware.com/lmx/

Labels: XML

Design Principles : Favor object composition over class inheritance


Object composition and inheritance are two techniques for reusing functionality in object-oriented systems.

Class inheritance, or subclassing, allows a subclass' implementation to be defined in terms of the parent class' implementation. This type of reuse is often called white-box reuse. This term refers to the fact that with inheritance, the parent class implementation is often visible to the subclasses.

Object composition is a different method of reusing functionality. Objects are composed to achieve more complex functionality. This approach requires that the objects have well-defined interfaces since the internals of the objects are unknown. Because objects are treated only as "black boxes," this type of reuse is often called black-box reuse.

Comparing composition and inheritance

So how exactly do composition and inheritance compare?

Here are several points of comparison:

  • It is easier to change the interface of a back-end class (composition) than a superclass (inheritance). A change to the interface of a back-end class necessitates a change to the front-end class implementation, but not necessarily the front-end interface. Code that depends only on the front-end interface still works, so long as the front-end interface remains the same. By contrast, a change to a superclass's interface can not only ripple down the inheritance hierarchy to subclasses, but can also ripple out to code that uses just the subclass's interface.

  • It is easier to change the interface of a front-end class (composition) than a subclass (inheritance). Just as superclasses can be fragile, subclasses can be rigid. You can't just change a subclass's interface without making sure the subclass's new interface is compatible with that of its supertypes. For example, you can't add to a subclass a method with the same signature but a different return type as a method inherited from a superclass. Composition, on the other hand, allows you to change the interface of a front-end class without affecting back-end classes.
  • Composition allows you to delay the creation of back-end objects until (and unless) they are needed, as well as changing the back-end objects dynamically throughout the lifetime of the front-end object. With inheritance, you get the image of the superclass in your subclass object image as soon as the subclass is created, and it remains part of the subclass object throughout the lifetime of the subclass.
  • It is easier to add new subclasses (inheritance) than it is to add new front-end classes (composition), because inheritance comes with polymorphism. If you have a bit of code that relies only on a superclass interface, that code can work with a new subclass without change. This is not true of composition, unless you use composition with interfaces. Used together, composition and interfaces make a very powerful design tool.

  • The explicit method-invocation forwarding (or delegation) approach of composition will often have a performance cost as compared to inheritance's single invocation of an inherited superclass method implementation. I say "often" here because the performance really depends on many factors, including how the JVM optimizes the program as it executes it.
  • With both composition and inheritance, changing the implementation (not the interface) of any class is easy. The ripple effect of implementation changes remains inside the same class.
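A minimal sketch of the points above: the front-end class holds a reference to a back-end object behind an interface, forwards calls to it (delegation), and can swap the back-end at run time, which inheritance cannot do. All class names are illustrative.

public class CompositionSketch {

    interface Storage {                       // the back-end contract
        void save(String data);
    }

    static class FileStorage implements Storage {
        public void save(String data) { System.out.println("file <- " + data); }
    }

    static class MemoryStorage implements Storage {
        public void save(String data) { System.out.println("memory <- " + data); }
    }

    // Front-end class: composes a Storage instead of extending one.
    static class AuditLog {
        private Storage storage;                                      // created or changed as needed
        AuditLog(Storage storage) { this.storage = storage; }
        void setStorage(Storage storage) { this.storage = storage; }  // swap the back-end at run time
        void record(String event) {
            storage.save("audit: " + event);                          // explicit forwarding (delegation)
        }
    }

    public static void main(String[] args) {
        AuditLog log = new AuditLog(new MemoryStorage());
        log.record("login");
        log.setStorage(new FileStorage());    // change the back-end object dynamically
        log.record("logout");
    }
}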

Links:

http://www.javaworld.com/javaworld/jw-11-1998/jw-11-techniques.html

http://brighton.ncsa.uiuc.edu/~prajlich/T/node14.html

Labels: SOFTWARE DESIGN

Relational Database Concepts

ACID (Atomicity, Consistency, Isolation, Durability) Properties

In computer science, ACID (Atomicity, Consistency, Isolation, Durability) is a set of properties that guarantee that database transactions are processed reliably. In the context of databases, a single logical operation on the data is called a transaction.

Atomicity
Atomicity refers to the ability of the DBMS to guarantee that either all of the tasks of a transaction are performed or none of them are. For example, the transfer of funds can be completed or it can fail for a multitude of reasons, but atomicity guarantees that one account won't be debited if the other is not credited.


Consistency
The consistency property ensures that the database remains in a consistent state before the start of the transaction and after the transaction is over (whether it was successful or not).

Isolation
Isolation refers to the ability of the application to make operations in a transaction appear isolated from all other operations. This means that no operation outside the transaction can ever see the data in an intermediate state; for example, a bank manager can see the transferred funds on one account or the other, but never on both—even if he ran his query while the transfer was still being processed. More formally, isolation means the transaction history (or schedule) is serializable. This ability is the constraint which is most frequently relaxed for performance reasons.

Durability
Durability refers to the guarantee that once the user has been notified of success, the transaction will persist, and not be undone. This means it will survive system failure, and that the database system has checked the integrity constraints and won't need to abort the transaction. Many databases implement durability by writing all transactions into a log that can be played back to recreate the system state right before the failure. A transaction can only be deemed committed after it is safely in the log.
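As a concrete illustration of atomicity and durability at the application level, here is a minimal JDBC sketch in which two updates are committed together or rolled back together; the connection URL, table, and column names are placeholders, not a real schema.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class TransferSketch {

    public static void transfer(String jdbcUrl, int fromAccount, int toAccount, long amount)
            throws SQLException {
        try (Connection con = DriverManager.getConnection(jdbcUrl)) {
            con.setAutoCommit(false);                        // start an explicit transaction
            try (PreparedStatement debit = con.prepareStatement(
                     "UPDATE accounts SET balance = balance - ? WHERE id = ?");
                 PreparedStatement credit = con.prepareStatement(
                     "UPDATE accounts SET balance = balance + ? WHERE id = ?")) {
                debit.setLong(1, amount);
                debit.setInt(2, fromAccount);
                debit.executeUpdate();
                credit.setLong(1, amount);
                credit.setInt(2, toAccount);
                credit.executeUpdate();
                con.commit();                                // durable once commit() returns
            } catch (SQLException e) {
                con.rollback();                              // atomicity: undo the partial work
                throw e;
            }
        }
    }
}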


Need-to-Know for the Database Developer

The Rules of the Game: Codd's Twelve Rules.

Many references to the twelve rules include a thirteenth rule - or rule zero:

A relational database management system (DBMS) must manage its stored data using only its relational capabilities. This is basically a corollary or companion requirement to rule #4.

1. Information Rule: All information in the database should be represented in one and only one way -- as values in a table.

2. Guaranteed Access Rule: Each and every datum (atomic value) is guaranteed to be logically accessible by resorting to a combination of table name, primary key value, and column name.

3. Systematic Treatment of Null Values: Null values (distinct from the empty character string or a string of blank characters, and distinct from zero or any other number) are supported in the fully relational DBMS for representing missing information in a systematic way, independent of data type.

4. Dynamic Online Catalog Based on the Relational Model: The database description is represented at the logical level in the same way as ordinary data, so authorized users can apply the same relational language to its interrogation as they apply to regular data.

5. Comprehensive Data Sublanguage Rule: A relational system may support several languages and various modes of terminal use. However, there must be at least one language whose statements are expressible, per some well-defined syntax, as character strings and that is comprehensive in supporting all of the following:
a. data definition
b. view definition
c. data manipulation (interactive and by program)
d. integrity constraints
e. authorization
f. transaction boundaries (begin, commit, and rollback)

6. View Updating Rule: All views that are theoretically updateable are also updateable by the system.

7. High-Level Insert, Update, and Delete: The capability of handling a base relation or a derived relation as a single operand applies not only to the retrieval of data, but also to the insertion, update, and deletion of data.

8. Physical Data Independence: Application programs and terminal activities remain logically unimpaired whenever any changes are made in either storage representation or access methods.

9. Logical Data Independence: Application programs and terminal activities remain logically unimpaired when information-preserving changes of any kind that theoretically permit unimpairment are made to the base tables.

10. Integrity Independence: Integrity constraints specific to a particular relational database must be definable in the relational data sublanguage and storable in the catalog, not in the application programs.

11. Distribution Independence: The data manipulation sublanguage of a relational DBMS must enable application programs and terminal activities to remain logically unimpaired whether and whenever data are physically centralized or distributed.

12. Nonsubversion Rule: If a relational system has or supports a low-level (single-record-at-a-time) language, that low-level language cannot be used to subvert or bypass the integrity rules or constraints expressed in the higher-level (multiple-records-at-a-time) relational language.

Relational Database Normalization

The concept of database normalization is not unique to any particular relational database management system. It can be applied to any of several implementations of relational databases, including Microsoft Access, dBase, Oracle, etc.

The benefits of normalizing your database include:
1.Avoiding repetitive entries
2.Reducing required storage space
3.Preventing the need to restructure existing tables to accommodate new data
4.Increased speed and flexibility of queries, sorts, and summaries

There are 5 normal forms in all, each progressively building on its predecessor. In order to reach peak efficiency, it is recommended that relational databases be normalized through at least the third normal form. In order to normalize a database, each table should have a primary key field that uniquely identifies each record in that table. A primary key can consist of a single field (an ID Number field for instance) or a combination of two or more fields that together make a unique key (called a multiple field primary key).

1NF
The first normal form (or 1NF) requires that the values in each column of a table are atomic. By atomic we mean that there are no sets of values within a column.
One method for bringing a table into first normal form is to separate the entities contained in the table into separate tables. For a table describing books, for example, this would result in Book, Author, Subject and Publisher tables.

2NF
The second normal form (or 2NF) requires that any non-key columns depend on the entire primary key. In the case of a composite primary key, this means that a non-key column cannot depend on only part of the composite key.

3NF
Third Normal Form (3NF) requires that all columns depend directly on the primary key. Tables violate the Third Normal Form when one column depends on another column, which in turn depends on the primary key (a transitive dependency).

2PL (2-Phase Locking) vs 2PC (2-Phase Commit)

2PC and 2PL are protocols used in conjunction with distributed databases.

The two-phase locking protocol (2PL) deals with how locks are acquired and released during a transaction, whereas the two-phase commit (2PC) protocol deals with how multiple hosts decide whether one specific transaction is written (committed) or not (aborted).

2PL says that first there is a phase where locks are (during a transaction) acquired (growth phase) and then there is a phase where the locks are being removed (shrinking phase). Once the shrinking phase started no more locks can be acquired during this transaction. The shrinking phase usually takes place after an abort or a commit phase in a typical database system.

The essence of 2PC is that after a transaction is complete and should be committed, a vote starts. Each node which is part of the transaction is asked to "prepare to commit". The node then checks whether a local commit is possible and, if yes, it votes "ready to commit" (RTC) [important: changes are not written to the database at that point]. Once a node has signaled RTC, the system must be kept in a state where the transaction is always committable. If all nodes signal RTC, the transaction master signals them a commit. If one of the nodes does not signal RTC, the transaction master signals abort to all local transactions.
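A toy sketch of that vote (an illustration of the idea, not a production protocol implementation): the coordinator asks every participant to prepare, and only if all of them answer "ready to commit" does it send commit; otherwise it sends abort to all of them.

import java.util.Arrays;
import java.util.List;

public class TwoPhaseCommitSketch {

    interface Participant {
        boolean prepare();   // phase 1: "can you commit?" (the RTC vote)
        void commit();       // phase 2, decision = commit
        void abort();        // phase 2, decision = abort
    }

    static boolean runTransaction(List<Participant> participants) {
        boolean allReady = true;
        for (Participant p : participants) {      // phase 1: collect the votes
            if (!p.prepare()) {
                allReady = false;
                break;
            }
        }
        for (Participant p : participants) {      // phase 2: one uniform decision for everyone
            if (allReady) {
                p.commit();
            } else {
                p.abort();
            }
        }
        return allReady;
    }

    public static void main(String[] args) {
        Participant ok = new Participant() {
            public boolean prepare() { return true; }
            public void commit() { System.out.println("committed"); }
            public void abort() { System.out.println("aborted"); }
        };
        System.out.println(runTransaction(Arrays.asList(ok, ok)));   // true: both commit
    }
}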

If all transactions follow the 2PL principle, their interleaved execution is always serializable. However, 2PL does not guarantee that the execution will be deadlock-free.

Deadlock: two (or more) transactions, each of them waiting for a resource held by the other.

Deadlock Prevention Algorithms: Wait-Die, Wound-Wait


Links
http://databases.about.com/od/specificproducts/a/acid.htm

http://www.15seconds.com/issue/020522.htm

http://www.ianywhere.com/developer/product_manuals/sqlanywhere/0901/en/html/dbugen9/00000159.htm

http://www.devhood.com/tutorials/tutorial_details.aspx?tutorial_id=95

http://www.bkent.net/Doc/simple5.htm

http://www.serverwatch.com/tutorials/article.php/10825_1549781_3

http://dev.mysql.com/tech-resources/articles/intro-to-normalization.html

http://www.databasejournal.com/sqletc/article.php/1428511

http://www.acm.org/classics/nov95/toc.html

http://en.wikipedia.org/wiki/Database_normalization

Database Knowledgebase : http://database.ittoolbox.com/

All about SqlServer - http://www.sqlservercentral.com/

Labels: DATABASE

64bit Windows Operating System Overview


The performance benefits seen from the x64 OS have been substantial, largely because of the larger virtual address space available to processes. There are 2^32 possible combinations that a 32-bit address can take (meaning 4 GB worth of addressable space). On the x86 operating system, half of that is allocated to the kernel, and the other half is given to user-mode processes. Since the addressing spaces of the user-mode processes are independent of each other, each process can reference up to 2 GB of memory. With a 64-bit addressing space, there is a possibility of 2^64 unique address combinations (16 exabytes). The x64 OS currently uses 43 bits for addressing, giving the kernel 8 TB of addressable memory and leaving the other 8 TB for user-mode processes.

WOW64 Layer

WOW64 is the x86 emulator that allows 32-bit Windows-based applications to run on 64-bit Windows. WOW64 launches and runs 32-bit applications seamlessly. The system isolates 32-bit applications from 64-bit applications, which includes preventing file and registry collisions. Console, GUI, and service applications are supported. However, 32-bit processes cannot load 64-bit DLLs, and 64-bit processes cannot load 32-bit DLLs.

Restrictions of the WOW64 subsystem
The WOW64 subsystem does not support the following programs:
• Programs that are compiled for 16-bit operating systems
• Kernel-mode programs that are compiled for 32-bit operating systems

16-bit programs
The x64-based versions of Windows Server 2003 and of Windows XP Professional x64 Edition do not support 16-bit programs or 16-bit program components. The software emulation that is required to run 16-bit programs on the x64-based version of Windows Server 2003 or of Windows XP Professional x64 Edition would significantly decrease the performance of those programs.

If a 32-bit program that requires 16-bit components tries to run a 16-bit file or component, the 32-bit program will log an error message in the System log. The operating system will then let the 32-bit program handle the error.

If any 32-bit applications make use of 16-bit binaries (.exe/.dll), and those 16-bit binaries need to be copied to the Win64 machine during the migration process, these applications may stop working on the Win64 machine.

32-bit drivers
The x64-based versions of Windows Server 2003 and of Windows XP Professional x64 Edition do not support 32-bit drivers. All hardware device drivers and program drivers must be compiled specifically for the x64-based version of Windows Server 2003 and of Windows XP Professional x64 Edition.

If a 32-bit program tries to register a 32-bit driver for automatic startup on a computer that is running an x64-based version of Windows Server 2003 or of Windows XP Professional x64 Edition, the bootstrap loader on the computer recognizes that the 32-bit driver is not supported. The x64-based version of Windows Server 2003 or of Windows XP Professional x64 Edition does not start the 32-bit driver, but does start the other registered drivers.

If any 64-bit applications make use of 32-bit or 16-bit device drivers, and those driver binaries need to be copied to the Win64 machine during the migration process, these applications may stop working on the Win64 machine.

File System Redirection
Rather than requiring the application to detect the operating system on which it is running, the file system redirection mechanism maps 32-bit file accesses to the appropriate system directory. Any time a 32-bit process attempts to access c:\windows\system32, the WOW64 layer redirects it to c:\windows\syswow64, which contains all of the 32-bit Windows binaries. This prevents a 32-bit process from trying to load a 64-bit binary. Any scripts or tools running in a 32-bit process that reference this directory will be automatically redirected to the syswow64 directory.

A new C:\Program Files directory is introduced with 64-bit Windows. The familiar Program Files directory is now reserved for native 64-bit applications. Your 32-bit applications are directed to the new C:\Program Files (x86) directory.

Registry Redirection
Any 32-bit process trying to read or write to HKEY_LOCAL_MACHINE\Software\ gets redirected to HKEY_LOCAL_MACHINE\Software\Wow6432Node\. This allows separate configurations to be maintained for 32-bit and 64-bit processes. Any custom settings or keys set in this node may need to exist in both keys, as 32-bit processes will be redirected to this new branch.


Links
http://msdn.microsoft.com/msdnmag/issues/06/05/x64/default.aspx

http://www.acucorp.com/company/newsletter/newsletter_featured/featured_6.php

http://msdn.microsoft.com/chats/transcripts/windows/windows_110904b.aspx

Labels: PORTING

C/C++ : Guidelines - Porting to 64bit Operating Systems

Guidelines to remember while porting C/C++ Applications to 64-bit mode

• Data Truncation
• Avoid assigning longs to ints
• Avoid Storing Pointers in ints
• Avoid Truncating Function Return Values
• Use Appropriate Print Specifiers

• Data Type Promotion
• Avoid Arithmetic between Signed and Unsigned Numbers

• Pointers
• Avoid Pointer Arithmetic between longs and ints
• Avoid Casting Pointers to ints or ints to Pointers
• Avoid Storing Pointers in ints
• Avoid Truncating Function Return Values

• Structures
• Avoid Using Unnamed and Unqualified Bit Fields
• Avoid Passing Invalid Structure References

• Hardcoded Constants
• Avoid Using Literals and Masks that Assume 32 bits
• Avoid Hardcoding Size of Data Types
• Avoid Hardcoding Bit Shift Values
• Avoid Hardcoding Constants with malloc(), memory(3), string(3)


Ref:
Migrating to 64-Bit Environments - http://www.informit.com/guides/printerfriendly.asp?g=cplusplus&seqNum=201

Porting to a 64-bit Platform - http://www.devx.com/Intel/Article/27237/2217?pf=true

HP-UX 64-bit Porting Concepts - http://docs.hp.com/en/5966-9844/ch03.html

HP-UX 64-bit Porting and Transition Guide: HP 9000 Computers - http://docs.hp.com/en/5966-9887/

Target 32- and 64-bit Platforms Together with a Few Simple Datatype Changes - http://www.devx.com/cplus/Article/27510/1954?pf=true

Labels: PORTING

