Function Adapters
A function adapter is a function object that lets you combine function objects with each other, with particular values, or with special functions.
The expression bind2nd(greater<int>(), 42), for example, yields a unary predicate that returns whether its argument is greater than 42.
The STL provides the following predefined function adapters:
* bind1st(op, value) ==> op(value, param)
* bind2nd(op, value) ==> op(param, value)
* not1(op) ==> !op(param)
* not2(op) ==> !op(param1, param2)
C++ Programming Tips
C++ inheritance is very similar to a parent-child relationship. When a class is inherited, all of its functions and data members are inherited, although not all of them will be accessible to the member functions of the derived class. There are, however, some exceptions. Some of the exceptions to be noted in C++ inheritance are as follows.
1. The constructors of a base class are not inherited
2. The destructor of a base class is not inherited
3. The assignment operator is not inherited
4. Friend functions and friend classes of the base class are not inherited
There are some points to remember about C++ inheritance. The protected and public members of the base class are all accessible in the derived class, but a private member of the base class is not accessible in a derived class.
3. Private Inheritance:
The key difference is that whereas public inheritance provides a common interface between two classes, private inheritance does not. Instead, it makes all of the public functions of the parent class private in the child class. This means they can be used to implement the child class without being accessible to the outside world.
The syntax for private inheritance is almost exactly the same as for public inheritance.
class obj : private implementationDetailOfObj { /* ... */ };
4. UTF-8 & C++
The strcoll function compares two strings according to the LC_COLLATE category, which provides locale-specific collating information. This function may fail if either string contains characters outside the domain of the current collating sequence. It is multi-thread safe as long as no other thread calls setlocale() while this function is executing.
iconv
The iconv API is the standard programming interface for converting character strings from one character encoding to another in Unix-like operating systems. Initially appearing on the HP-UX operating system, it was standardized within XPG4 and is part of the Single UNIX Specification (SUS).
All recent Linux distributions contain a free implementation of iconv() as part of the GNU C Library, which is the C library for current Linux systems. To use it, the GNU glibc locales need to be installed; they are provided as a separate package, usually named glibc-locale, and are normally installed by default.
Related C standard library conversion functions:
* mbstowcs - converts a multibyte string to a wide-character string
* wcstombs - converts a wide-character string to a multibyte string
* mblen - returns the number of bytes in the next multibyte character
GLib
For many applications, C with GLib is an alternative to C++ with STL (see GObject for a detailed comparison).
In computing, GLib refers to a cross-platform software utility library. It started life as part of the GTK+ project; however, before releasing version 2 of GTK+, the project's developers decided to separate non-GUI-specific code from the GTK+ platform, creating GLib as a separate product. GLib was released as a separate library so that developers who did not make use of the GUI-related portions of GTK+ could use the non-GUI portions without the overhead of depending on a full-blown GUI library.
Since GLib is a cross-platform library, applications using it to interface with the operating system are usually portable across different operating systems without major changes.
ICU
The International Component for Unicode (ICU) is a mature, portable set of C/C++ and Java libraries for Unicode support, software internationalization (I18N) and globalization (G11N), giving applications the same results on all platforms.
Microsoft Coding Standard Rules - http://msdn.microsoft.com/en-us/library/czefa0ke.aspx
The Complete Guide to C++ Strings, Part II - String Wrapper Classes : http://www.codeproject.com/KB/string/cppstringguide2.aspx?fid=11477&df=90&mpp=25&noise=3&sort=Position&view=Quick&fr=126&select=308883
C++ Strings - http://richardbowles.tripod.com/cpp/cpp15.htm
C++/Java Interoperability
In-Process Integration
JNI
Out-of-Process Integration
WebServices
ESB (Mule/OpenESB/...)
EAI (e.g., Tibco/CORBA/...)
RPC
Sockets
Links:
Java C++ Socket Communication Class Code - http://www.keithv.com/project/socket.html
Endianess : Java vs C++ - http://mindprod.com/jgloss/endian.html
Experiences Converting a C++ Communication Software Framework to Java By Prashant Jain and Douglas C. Schmidt - http://www.cs.wustl.edu/~schmidt/C++2java.html
C++/JAVA communication - http://adtmag.com/reports/article.aspx?editorialsid=618
C++, Java, & XML Processing - http://www.ddj.com/java/184401817
Unix Miscellaneous
1. ipcs provides information on the IPC facilities for which the calling process has read access.
The -i option allows a specific resource id to be specified. Only information on this id will be printed.
Options:
-m : shared memory segments
-q : message queues
-s : semaphore arrays
-a : all (this is the default)
2. ipcrm - remove a message queue, semaphore set or shared memory id
Options:
-M shmkey : removes the shared memory segment created with shmkey after the last detach is performed.
-m shmid : removes the shared memory segment identified by shmid after the last detach is performed.
-Q msgkey : removes the message queue created with msgkey.
-q msgid : removes the message queue identified by msgid.
-S semkey : removes the semaphore created with semkey.
-s semid : removes the semaphore identified by semid.
3. swab - swap adjacent bytes (Useful during Porting on Unix Platforms)
#include <unistd.h>
void swab(const void *from, void *to, ssize_t n);
Description:
The swab() function copies n bytes from the array pointed to by from to the array pointed to by to, exchanging adjacent even and odd bytes. This function is used to exchange data between machines that have different low/high byte ordering.
This function does nothing when n is negative. When n is positive and odd, it handles n-1 bytes as above, and does something unspecified with the last byte. (In other words, n should be even.)
4. lsof - A utility which lists open files on a Linux/UNIX system.
glsof - GUI for lsof.
A command meaning "list open files", which is used in many Unix-like systems to report a list of all open files and the processes that opened them.
Open files in the system include disk files, pipes, network sockets and devices opened by all processes. One use for this command is when a disk cannot be unmounted because (unspecified) files are in use. The listing of open files can be consulted (suitably filtered if necessary) to identify the process that is using the files.
5. netstat (network statistics) - A command-line tool that displays network connections (both incoming and outgoing), routing tables, and a number of network interface statistics. It is available on Unix, Unix-like, and Windows NT-based operating systems.
Parameters used with this command must be prefixed with a hyphen (-) rather than a slash (/).
-a : Displays all active TCP connections and the TCP and UDP ports on which the computer is listening.
-b : Displays the binary (executable) program's name involved in creating each connection or listening port. (Windows only)
-e : Displays ethernet statistics, such as the number of bytes and packets sent and received. This parameter can be combined with -s.
-f : Displays fully qualified domain names
-i : Displays network interfaces and their statistics (not available under Windows)
-n : Displays active TCP connections, however, addresses and port numbers are expressed numerically and no attempt is made to determine names.
-o : Displays active TCP connections and includes the process ID (PID) for each connection. You can find the application based on the PID on the Processes tab in Windows Task Manager. This parameter can be combined with -a, -n, and -p. This parameter is available on Microsoft Windows XP and 2003 Server (not Microsoft Windows 2000).
-p Windows: Protocol : Shows connections for the protocol specified by Protocol. In this case, the Protocol can be tcp, udp, tcpv6, or udpv6. If this parameter is used with -s to display statistics by protocol, Protocol can be tcp, udp, icmp, ip, tcpv6, udpv6, icmpv6, or ipv6.
-p Linux: Process : Show which processes are using which sockets (similar to -b under Windows) (you must be root to do this)
-r : Displays the contents of the IP routing table. (This is equivalent to the route print command under Windows.)
-s : Displays statistics by protocol. By default, statistics are shown for the TCP, UDP, ICMP, and IP protocols. If the IPv6 protocol for Windows XP is installed, statistics are shown for the TCP over IPv6, UDP over IPv6, ICMPv6, and IPv6 protocols. The -p parameter can be used to specify a set of protocols.
-v : When used in conjunction with -b it will display the sequence of components involved in creating the connection or listening port for all executables.
Interval : Redisplays the selected information every Interval seconds. Press CTRL+C to stop the redisplay. If this parameter is omitted, netstat prints the selected information only once.
/? : Displays help at the command prompt. (only on Windows)
6. ptrace - process trace
#include <sys/ptrace.h>
long ptrace(enum __ptrace_request request, pid_t pid, void *addr, void *data);
The ptrace() system call provides a means by which a parent process may observe and control the execution of another process, and examine and change its core image and registers. It is primarily used to implement breakpoint debugging and system call tracing.
Have you ever wondered how system calls can be intercepted? Have you ever tried fooling the kernel by changing system call arguments? Have you ever wondered how debuggers stop a running process and let you take control of the process?
If you are thinking of using complex kernel programming to accomplish tasks, think again. Linux provides an elegant mechanism to achieve all of these things: the ptrace (Process Trace) system call. ptrace provides a mechanism by which a parent process may observe and control the execution of another process. It can examine and change its core image and registers and is used primarily to implement breakpoint debugging and system call tracing.
Links:
Playing with PTrace Part1 - http://www.linuxjournal.com/article/6100
Playing with PTrace Part2 - http://www.linuxjournal.com/article/6210
7. strace - System Call Trace
For tracing the system calls of a program, we have a very good tool in strace. What is unique about strace is that, when it is run in conjunction with a program, it outputs all the calls made to the kernel by the program.
In many cases, a program may fail because it is unable to open a file or because of insufficient memory. And tracing the output of the program will clearly show the cause of either problem.
The use of strace is quite simple and takes the following form: $ strace <program>
For example, I can run a trace on 'ls' as follows : $ strace ls
This will output a great amount of data onto the screen. If it is hard to keep track of the scrolling mass of data, there is an option to write the output of strace to a file instead, using the -o option.
For example: $ strace -o strace_ls_output.txt ls
Links
http://linuxhelp.blogspot.com/2006/05/strace-very-powerful-troubleshooting.html
http://www.redhat.com/magazine/010aug05/features/strace/
Virtual Base Class, Multiple Inheritance: Construction and Destruction Order
The construction algorithm now works as follows:
Virtual bases have the highest precedence; depth comes next, and then the order of appearance: leftmost bases are constructed first.
The order of destruction is the reverse of the order of construction.
Algorithm:
1. Construct the virtual bases using the previous depth-first, left-to-right order of appearance. Since there's only one virtual base B, it's constructed first.
2. Construct the non-virtual bases according to the depth-first, left-to-right order of appearance. The deepest base specifier list contains A. Consequently, it's constructed next.
3. Apply the same rule on the next depth level. Consequently, D is constructed.
4. Finally, D2's constructor is called.
Examples:
Example1:
#include <iostream>
using namespace std;

class Base
{
public:
    Base() { cout << "\n Base C'ctor\n"; }
};
class Derived1 : virtual public Base
{
public:
    Derived1() { cout << "\n Derived1 C'ctor\n"; }
};
class Derived2 : virtual public Base
{
public:
    Derived2() { cout << "\n Derived2 C'ctor\n"; }
};
class Derived3 : public Derived1, public Derived2
{
public:
    Derived3() { cout << "\n Derived3 C'ctor\n"; }
};
int main()
{
    Derived3 d;
}
Output:
Base C'ctor
Derived1 C'ctor
Derived2 C'ctor
Derived3 C'ctor
Example2:
#include <iostream>
using namespace std;

class Base
{
public:
    Base() { cout << "\n Base C'ctor\n"; }
};
class Derived1 : public Base
{
public:
    Derived1() { cout << "\n Derived1 C'ctor\n"; }
};
class Derived2 : public Base
{
public:
    Derived2() { cout << "\n Derived2 C'ctor\n"; }
};
class Derived3 : public Derived1, virtual public Derived2
{
public:
    Derived3() { cout << "\n Derived3 C'ctor\n"; }
};
int main()
{
    Derived3 d;
}
Output :
Base C'ctor
Derived2 C'ctor
Base C'ctor
Derived1 C'ctor
Derived3 C'ctor
Example3:
#include <iostream>
using namespace std;

class Base
{
public:
    Base() { cout << "\n Base C'ctor\n"; }
};
class Derived1 : public Base
{
public:
    Derived1() { cout << "\n Derived1 C'ctor\n"; }
};
class Derived2 : virtual public Base
{
public:
    Derived2() { cout << "\n Derived2 C'ctor\n"; }
};
class Derived3 : public Derived1, public Derived2
{
public:
    Derived3() { cout << "\n Derived3 C'ctor\n"; }
};
int main()
{
    Derived3 d;
}
Output:
Base C'ctor
Base C'ctor
Derived1 C'ctor
Derived2 C'ctor
Derived3 C'ctor
Link:
http://www.informit.com/guides/content.aspx?g=cplusplus&seqNum=169
Pointer-to-Pointer and Reference-to-Pointer
Why do we need pointer-to-pointer and reference-to-pointer?
When we use "pass by pointer" to pass a pointer to a function, only a copy of the pointer is passed to the function; in other words, "pass by pointer" passes the pointer using "pass by value." In most cases, this does not present a problem. A problem arises, however, when you modify the pointer inside the function: instead of modifying the variable it points to, you are only modifying a copy of the pointer, and the original pointer remains unmodified.
Link:
http://www.codeguru.com/cpp/cpp/cpp_mfc/pointers/article.php/c4089/
Private Destructors
Why private destructors?
1. While using Reference Counting Objects
The simple answer: you don't want anyone else to be able to destroy the object.
A reference-counting object is an object that tracks the number of references to itself and destroys itself when no references point to it. You don't own the lifetime of the object; it may be in use by more than one reference simultaneously. Imagine the destructor of this object being public: one reference is released and innocently calls the public destructor, destroying the object while other references still point to the now-dead object. To avoid such situations, we make the destructor private and provide an alternate function that is careful to invoke the destructor only when the reference count reaches zero.
2. Make sure objects are created only on Heap not Stack
We can achieve this by making the destructor private. The compiler must be able to call the destructor when an automatic (stack) object goes out of scope, so if the destructor is private, objects cannot be created on the stack; the user can only create objects on the heap and destroy them through a member function.
3. When you want to create non-inheritable (final) classes
It can happen that you do not want your class to be inherited by any other class, i.e., you want to create a final class. To make a class final, all you have to do is make the class's constructor or destructor private. Since a class can have any number of constructors but only one destructor, it is easiest to keep the single destructor private and thereby make the class final. (C++11 later added the final keyword for exactly this purpose.)
Links:
http://blogs.msdn.com/larryosterman/archive/2005/07/01/434684.aspx
http://prabhagovind.wordpress.com/2006/12/21/some-thing-about-private-destructors/
http://prashanth-cpp.blogspot.com/2007/01/delete-this.html
URL Encoding
Uniform Resource Locators (URL) specification
The specification for URLs (RFC 1738, Dec. '94) poses a problem, in that it limits the use of allowed characters in URLs to only a limited subset of the US-ASCII character set: "...
Only alphanumerics [0-9a-zA-Z], the special characters "$-_.+!*'()," [not including the quotes - ed], and reserved characters used for their reserved purposes may be used unencoded within a URL."
HTML, on the other hand, allows the entire range of the ISO-8859-1 (ISO-Latin) character set to be used in documents - and HTML4 expands the allowable range to include all of the Unicode character set as well. In the case of non-ISO-8859-1 characters (characters above FF hex/255 decimal in the Unicode set), they just can not be used in URLs, because there is no safe way to specify character set information in the URL content yet [RFC2396.]
URLs should be encoded everywhere in an HTML document that a URL is referenced to import an object (A, APPLET, AREA, BASE, BGSOUND, BODY, EMBED, FORM, FRAME, IFRAME, ILAYER, IMG, ISINDEX, INPUT, LAYER, LINK, OBJECT, SCRIPT, SOUND, TABLE, TD, TH, and TR elements.)
How are characters URL encoded?
URL encoding of a character consists of a "%" symbol, followed by the two-digit hexadecimal representation (case-insensitive) of the ISO-Latin code point for the character.
Example:
1. Space = decimal code point 32 in the ISO-Latin set.
2. 32 decimal = 20 in hexadecimal
3. The URL encoded representation will be "%20"
DBA Morning Checklist
Link : http://www.sqlservercentral.com/articles/Database+Administration/62480/
Backups -
- Verify that the network backups are good by checking the backup emails. If a backup did not complete, contact the networking group and send an email to the DBA group.
- Check the SQL Server backups. If a backup failed, research the cause of the failure and ensure that it is scheduled to run tonight.
- Check the database backup run duration of all production servers. Verify that the average time is within the normal range. Any significant increases in backup duration times need to be emailed to the networking group, requesting an explanation. The reason for this is that networking starts placing databases backups to tape at certain times, and if they put it to tape before the DBAs are done backing up, the tape copy will be bad.
- Verify that all databases were backed up. If any new databases were not backed up, create a backup maintenance plan for them and check the current schedule to determine a backup time.
Disk Space
- Verify the free space on each drive of the servers. If there is a significant variance in free space from the day before, research the cause of the fluctuation and resolve it if necessary. Oftentimes, log files will grow because of monthly jobs.
Job Failures
- Check for failed jobs by connecting to each SQL Server, selecting "job activity", and filtering on failed jobs. If a job failed, resolve the issue, contacting the owner of the job if necessary.
System Checks
- Check SQL logs on each server. In the event of a critical error, notify the DBA group and come to an agreement on how to resolve the problem.
- Check Application log on each server. In the event of a critical or unusual error, notify the DBA group and the networking group to determine what needs to be done to fix the error.
Performance
- Check Performance statistics for All Servers using the monitoring tool and research and resolve any issues.
- Check Performance Monitor on ALL production servers and verify that all counters are within the normal range.
Connectivity
- Log into the Customer application and verify that it can connect to the database and pull up data. Verify that it is performing at an acceptable speed. In the event of a failure, email the Customer Support Group, DBA group, and the DBA manager, before proceeding to resolve the issue.
- Log into the Billing application and verify that it can connect to the database and pull up data. Verify that it is performing at an acceptable speed. In the event of a failure, email the Billing Support Group, DBA group, and the DBA manager, before proceeding to resolve the issue.
Replication
- Check replication on each server by checking each publication to make sure the distributor is running for each subscription.
- When replication is stopped, or changes to replication are made, send an email to the DBA group. For example, if the DBA stops the distributor, let the other DBAs know when it is stopped and then when it is restarted again.
- Check for any emails for the SQL Jobs that monitor row counts on major tables on the publisher and subscriber. If a wide variance occurs, send an email message to the DBAs and any appropriate IS personnel.
FMEA and FishBone Analysis
FMEA - Spotting problems before a solution is implemented
FishBone Analysis - Identifying the Likely Causes of Problems
Failure Mode and Effects Analysis (FMEA)
Spotting problems before a solution is implemented. When things go badly wrong, it's easy to say with hindsight, "We should have known that would happen." And with a little foresight, perhaps, problems could have been avoided, if only someone had asked, "What could go wrong?"
By looking at all the things that could possibly go wrong at design stage, you can cheaply solve problems that would otherwise take vast effort and expense to correct, if left until the solution has been deployed in the field. Failure Modes and Effects Analysis (FMEA) helps you do this.
More than this, FMEA provides a useful approach for reviewing existing processes or systems, so that problems with these can be identified and eliminated.
Understanding FMEA
FMEA grew out of systems engineering, and is a widely-used tool for quality control. It builds on tools like Risk Analysis and Cause and Effect Analysis to try to predict failures before they happen. Originally used in product development, it is also effective in improving the design of business processes and systems.
Link : http://www.mindtools.com/pages/article/newTMC_82.htm
Cause & Effect Diagrams
Identifying the Likely Causes of Problems
Related variants: Fish or Fishbone Diagrams, and Ishikawa Diagrams
Cause and Effect Diagrams help you to think through causes of a problem thoroughly. Their major benefit is that they push you to consider all possible causes of the problem, rather than just the ones that are most obvious.
The approach combines brainstorming with use of a type of concept map.
Cause and Effect Diagrams are also known as Fishbone Diagrams, because a completed diagram can look like the skeleton of a fish.
Link: http://www.mindtools.com/pages/article/newTMC_03.htm
Risk Analysis & Risk Management
Evaluating and Managing the Risks You Face
Almost everything we do in today's business world involves a risk of some kind: customer habits change, new competitors appear, factors outside your control could delay your project. But formal risk analysis and risk management can help you to assess these risks and decide what actions to take to minimize disruptions to your plans. They will also help you to decide whether the strategies you could use to control risk are cost-effective.
How to use the tool:
Here we define risk as 'the perceived extent of possible loss'. Different people will have different views of the impact of a particular risk – what may be a small risk for one person may destroy the livelihood of someone else.
One way of putting figures to risk is to calculate a value for it as:
Risk = probability of event x cost of event
Doing this allows you to compare risks objectively. We use this approach formally in decision making with Decision Trees.
CMM
CMM - Capability Maturity Model
Structure of the CMM
The CMM involves the following aspects:
Maturity Levels: A 5-Level process maturity continuum - where the uppermost (5th) level is a notional ideal state where processes would be systematically managed by a combination of process optimization and continuous process improvement.
Key Process Areas: A Key Process Area (KPA) identifies a cluster of related activities that, when performed collectively, achieve a set of goals considered important.
Goals: The goals of a key process area summarize the states that must exist for that key process area to have been implemented in an effective and lasting way. The extent to which the goals have been accomplished is an indicator of how much capability the organization has established at that maturity level. The goals signify the scope, boundaries, and intent of each key process area.
Common Features: Common features include practices that implement and institutionalize a key process area. There are five types of common features: Commitment to Perform, Ability to Perform, Activities Performed, Measurement and Analysis, and Verifying Implementation.
Key Practices: The key practices describe the elements of infrastructure and practice that contribute most effectively to the implementation and institutionalization of the KPAs.
Levels of the CMM:
There are five levels defined along the continuum of the CMM, and, according to the SEI: "Predictability, effectiveness, and control of an organization's software processes are believed to improve as the organization moves up these five levels. While not rigorous, the empirical evidence to date supports this belief."
The levels are:
Level 1 - Ad hoc (Chaotic)
It is characteristic of processes at this level that they are (typically) undocumented and in a state of dynamic change, tending to be driven in an ad hoc, uncontrolled and reactive manner by users or events. This provides a chaotic or unstable environment for the processes.
Level 2 - Repeatable
It is characteristic of processes at this level that some processes are repeatable, possibly with consistent results. Process discipline is unlikely to be rigorous, but where it exists it may help to ensure that existing processes are maintained during times of stress.
Level 3 - Defined
It is characteristic of processes at this level that there are sets of defined and documented standard processes established and subject to some degree of improvement over time. These standard processes are in place (i.e., they are the AS-IS processes) and used to establish consistency of process performance across the organization.
Level 4 - Managed
It is characteristic of processes at this level that, using process metrics, management can effectively control the AS-IS process (e.g., for software development). In particular, management can identify ways to adjust and adapt the process to particular projects without measurable losses of quality or deviations from specifications. Process capability is established at this level.
Level 5 - Optimized
It is characteristic of processes at this level that the focus is on continually improving process performance through both incremental and innovative technological changes/improvements.
At maturity level 5, processes are concerned with addressing statistical common causes of process variation and changing the process (for example, shifting the mean of the process performance) to improve process performance. This would be done at the same time as maintaining the likelihood of achieving the established quantitative process-improvement objectives.
Links:
http://www.sei.cmu.edu/cmm/
http://en.wikipedia.org/wiki/Capability_Maturity_Model
ITIL vs Six Sigma
ITIL essentially provides a clearly defined structure for delivering and supporting IT-based services.
Six Sigma is a quality-management process based on statistical measurements used to drive quality improvement while reducing operational costs.
The ITIL structure is a framework to deliver and support IT-based services. SLM, by definition, is the process of defining and then managing IT service delivery to a standard of quality. Six Sigma fits well with this because it creates a way to tangibly measure the service that can either formally be built into service-level agreements (SLA) or informally within the organizational structure.
ITIL defines a framework for IT Service Management. It consists of a set of guidelines, based on industry best practices, that specify what an IT organization should do. ITIL does not, however, define how to do it. For example, ITIL specifies that IT should allocate a priority for each incident that comes into the service desk. But, it does not specify how to allocate those priorities.
With ITIL, it's up to the IT staff to flesh out the details of process flow, and create detailed work instructions, all in a way that makes sense for their organization.
Six Sigma, on the other hand, defines a specific process, based on statistical measurement, that drives quality improvement and reduces operational costs. It helps in developing detailed work instructions, and it defines a methodology for continually mapping, measuring, and improving the quality process. Six Sigma tells you how, but doesn't tell you what. This approach does not specify any best practices specifically for IT Service Management.
In summary then, ITIL defines the "what" of service management, and Six Sigma defines the "how" of quality improvement. Together, they are a perfect fit for improving the quality of IT service delivery and support.
Links:
Six Sigma - http://www.isixsigma.com/
ISO 9001 , ITIL , Sixsigma - http://www.thinkhdi.com/library/deliverfile.aspx?filecontentid=526
Combining ITIL & SixSigma - http://documents.bmc.com/products/documents/67/60/46760/46760.pdf
Use Sixsigma to complement ITIL v3 - http://www.eweek.com/c/a/Knowledge-Center/How-to-Use-Six-Sigma-to-Complement-ITIL-v3/
Quality Methods / CMM / ITIL / Six Sigma - http://www.sourcingmag.com/outsourcing_tactics/quality_methods_cmm_itil_six_sigma.html
Use Fishbone to solve complex problems - http://blogs.techrepublic.com.com/tech-manager/?p=561&tag=nl.e053
Communication is the key to controlling project chaos - http://blogs.techrepublic.com.com/tech-manager/?p=544&tag=nl.e053
Manage project time requirements with these methods - http://blogs.techrepublic.com.com/tech-manager/?p=548&tag=nl.e053
http://www.nextslm.org/itil/itil_sigma.htm
MFC
Suppose you add a menu item that will send the ID_MY_COMMAND command message to the MDI main frame of your application:
1. The command is first routed to the main frame, which will check the active child frame first.
2. The child frame will first check the active view, which checks its own message map before routing the command to the associated document.
3. The document will check its own message map before checking the message map of the associated document template.
4. Going back to the child frame's routing, the child frame will check its own message map.
5. Going back to the main frame's routing, the main frame will check its own message map.
6. Ultimately, the message map of the application object is checked, where an entry for your message is found and the appropriate handler is called.
If you find that you must use a different command routing scheme, perhaps to include your own special command target classes, you can do so by overriding the OnCmdMsg() member of CCmdTarget. This may involve overriding OnCmdMsg() for several classes and is beyond the scope of this book; for more information, see Command Routing in the MFC online documentation.
One of the most remarkable features of the document/view architecture is that an application can handle command messages almost anywhere. "Command messages" is MFC's term for the WM_COMMAND messages that are generated when items are selected from menus, keyboard accelerators are pressed, and toolbar buttons are clicked. The frame window is the physical recipient of most command messages, but command messages can be handled in the view class, the document class, or even the application class by simply including entries for the messages you want to handle in the class's message map. Command routing lets you put command handlers where it makes the most sense to put them rather than relegate them all to the frame window class. Update commands for menu items, toolbar buttons, and other user interface objects are also subject to command routing, so you can put ON_UPDATE_COMMAND_UI handlers in nonframe window classes as well.
CFrameWnd::OnCmdMsg first routes the message to the active view by calling the view's OnCmdMsg function. If pView->OnCmdMsg returns 0, indicating that the view didn't process the message (that is, that the view's message map doesn't contain an entry for this particular message), the frame window tries to handle the message itself by calling CWnd::OnCmdMsg. If that, too, fails, the frame window then tries the application object. Ultimately, if none of the objects processes the message, CFrameWnd::OnCmdMsg returns FALSE and the framework passes the message to ::DefWindowProc for default processing.
This explains how a command message received by a frame window gets routed to the active view and the application object, but what about the document object? When CFrameWnd::OnCmdMsg calls the active view's OnCmdMsg function, the view first tries to handle the message itself. If it doesn't have a handler for the message, the view calls the document's OnCmdMsg function. If the document can't handle the message, it passes it up the ladder to the document template. Figure 9-2 shows the path that a command message travels when it's delivered to an SDI frame window. The active view gets first crack at the message, followed by the document associated with that view, the document template, the frame window, and finally the application object. The routing stops if any object along the way processes the message, but it continues all the way up to ::DefWindowProc if none of the objects' message maps contains an entry for the message. Routing is much the same for command messages delivered to MDI frame windows, with the framework making sure that all the relevant objects, including the child window frame that surrounds the active MDI view, get the opportunity to weigh in.
The value of command routing becomes apparent when you look at how a typical document/view application handles commands from menus, accelerators, and toolbar buttons. By convention, the File-New, File-Open, and File-Exit commands are mapped to the application object, where CWinApp provides OnFileNew, OnFileOpen, and OnAppExit command handlers for them. File-Save and File-Save As are normally handled by the document object, which provides default command handlers named CDocument::OnFileSave and CDocument::OnFileSaveAs. Commands to show and hide toolbars and status bars are handled by the frame window using CFrameWnd member functions, and most other commands are handled by either the document or the view.
An important point to keep in mind as you consider where to put your message handlers is that only command messages and user interface updates are subject to routing. Standard Windows messages such as WM_CHAR, WM_LBUTTONDOWN, WM_CREATE, and WM_SIZE must be handled by the window that receives the message. Mouse and keyboard messages generally go to the view, and most other messages go to the frame window. Document objects and application objects never receive noncommand messages.
Links:
MFC Messages and Commands Routing -
http://www.informit.com/library/content.aspx?b=Visual_C_PlusPlus&seqNum=45
Modal Dialog Box Prevents Calls to PreTranslateMessage - http://support.microsoft.com/kb/126874
MFC Tech Notes - http://msdn.microsoft.com/en-us/library/azt48yaw(VS.71).aspx
MFC Command Routing - http://www.ezdoum.com/upload/MFCMessageRouting.pdf
2. MFC Subclassing
Subclassing is a standard technique in Windows programming for customizing the behavior of a window; MFC wraps subclassing in virtual function overriding.
Subclassing allows an application to intercept and process messages sent or posted to a particular window before the window has a chance to process them. This is typically done by replacing the window procedure for a window with an application-defined window procedure.
The term subclassing here differs from the object-oriented sense and really means "use this class to handle the Windows messages for this control." This subclassing is normally performed during the first pass of DoDataExchange(), from inside a DDX_Control() routine. You must pass the control ID and a valid parent dialog box pointer to SubclassDlgItem().
To manually subclass the new custom edit box (m_ceditCustomEdit), for example, your OnInitDialog() function may look like this:
BOOL CCustomDlg::OnInitDialog()
{
    CDialog::OnInitDialog();
    m_ceditCustomEdit.SubclassDlgItem(IDC_CUSTOM_EDIT, this);
    return TRUE;
}
WARNING
You must call SubclassDlgItem() only after your dialog box and control window have valid window handles. This means placing the subclass call after the call to the OnInitDialog() base class.
If the subclassing succeeds, SubclassDlgItem() returns TRUE. If you already know the HWND handle of the control, you can call SubclassWindow() from the dialog box instead, passing the control's window handle. SubclassWindow() also returns TRUE if the subclassing was successful.
You can add an override for the PreSubclassWindow() virtual function in your derived control class. Your PreSubclassWindow() override is called just before the control's messages are hooked into your derived class's message map. This lets you perform dynamic changes to the subclassing procedure or just some last-minute initialization of your new control-handler class.
You can call UnsubclassWindow() to make the control revert to using the original default (CWnd) handler object.
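The window-procedure replacement that underlies all of this can be sketched in portable C++ without the Windows API. In the sketch below, `Window`, `Subclass`, and the message value 42 are hypothetical stand-ins for HWND, SubclassWindow(), and a real message ID:

```cpp
#include <string>

// A "window" stores a procedure pointer; subclassing swaps it for a
// replacement that filters selected messages and forwards the rest
// to the saved original procedure.
using WndProc = std::string (*)(int msg);

static std::string DefaultProc(int) { return "default"; }

struct Window { WndProc proc = DefaultProc; };

static WndProc g_originalProc = nullptr;

static std::string SubclassProc(int msg) {
    if (msg == 42) return "intercepted";  // handle chosen messages ourselves
    return g_originalProc(msg);           // forward everything else
}

inline void Subclass(Window& w) {
    g_originalProc = w.proc;  // remember the original procedure
    w.proc = SubclassProc;    // install the replacement
}
```

After Subclass() runs, message 42 is intercepted while all other messages still reach the original procedure, which is exactly the contract MFC preserves when it hooks a control's messages into your derived class's message map.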
MFC ActiveX Controls: Subclassing a Windows Control :
This article describes the process for subclassing a common Windows control to create an ActiveX control. Subclassing an existing Windows control is a quick way to develop an ActiveX control. The new control will have the abilities of the subclassed Windows control, such as painting and responding to mouse clicks. The MFC ActiveX controls sample BUTTON is an example of subclassing a Windows control.
To subclass a Windows control, complete the following tasks:
1. Override the IsSubclassedControl and PreCreateWindow member functions of COleControl
2. Modify the OnDraw member function
3. Handle any ActiveX control messages (OCM) reflected to the control
Links:
http://www.informit.com/library/content.aspx?b=Visual_C_PlusPlus&seqNum=63
MFC & ActiveX Control - http://msdn.microsoft.com/en-us/library/9s2s80tk(VS.80).aspx
3. OnOpenDocument vs OnNewDocument
Generally speaking, MFC applications more commonly override OnNewDocument than OnOpenDocument. Why? Because OnOpenDocument indirectly calls the document's Serialize function, which initializes a document's persistent data members with values retrieved from a document file. Only nonpersistent data members—those that aren't initialized by Serialize—need to be initialized in OnOpenDocument. OnNewDocument, by contrast, performs no default initialization of the document's data members. If you add data members to a document class and want those data members reinitialized whenever a new document is created, you need to override OnNewDocument.
Before a new document is created or opened, the framework calls the document object's virtual DeleteContents function to delete the document's existing data. Therefore, an SDI application can override CDocument::DeleteContents and take the opportunity to free any resources allocated to the document and perform other necessary cleanup chores in preparation for reusing the document object. MDI applications generally follow this model also, although MDI document objects differ from SDI document objects in that they are individually created and destroyed as the user opens and closes documents.
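The document-reuse pattern can be sketched as follows. This is an illustrative model in portable C++, not real MFC: the member names mirror CDocument, and the `fileData` parameter stands in for what Serialize() would load from the archive:

```cpp
#include <string>
#include <vector>

// SDI document reuse: the framework calls DeleteContents() before both
// "new" and "open", so cleanup lives in one place.
struct Document {
    std::vector<std::string> lines;  // persistent data (loaded by Serialize)
    bool modified = false;           // nonpersistent state

    void DeleteContents() {          // free existing data before reuse
        lines.clear();
        modified = false;
    }
    bool OnNewDocument() {
        DeleteContents();            // the framework does this first in MFC
        lines.push_back("untitled"); // reinitialize members a new document needs
        return true;
    }
    bool OnOpenDocument(const std::vector<std::string>& fileData) {
        DeleteContents();
        lines = fileData;            // stands in for Serialize(ar) loading
        return true;
    }
};
```

Note how OnOpenDocument() gets its data "for free" from loading, while OnNewDocument() must explicitly reinitialize, which is why the latter is the more commonly overridden of the two.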
4. SDI and MDI
MFC makes it easy to work with both single-document interface (SDI) and multiple-document interface (MDI) applications.
SDI applications allow only one open document frame window at a time. MDI applications allow multiple document frame windows to be open in the same instance of an application.
An MDI application has a window within which multiple MDI child windows, which are frame windows themselves, can be opened, each containing a separate document. In some applications, the child windows can be of different types, such as chart windows and spreadsheet windows. In that case, the menu bar can change as MDI child windows of different types are activated.
5. Device Contexts in MFC
Device contexts are Win32 objects; they are represented by HDC device-context handles. MFC provides wrapper classes for the device context: the CDC base class, plus a number of more specialized derived classes.
The basic CDC class is huge and supports all the GDI drawing functions, coordinate mapping functions, clipping functions, font-manipulation and rendering functions, printer-specific functions, path-tracking functions, and metafile-playing functions.
The CDC base class encapsulates all the device-context functionality and drawing functions that use a Win32 HDC object. The actual Win32 device-context handle is accessible via the public m_hDC member. You can retrieve this handle with the device context's GetSafeHdc() function.
You often will be handed a pointer to an initialized CDC object from MFC framework functions, such as CView::OnDraw() and CView::OnBeginPrinting(). These objects are nicely clipped to the dimensions of the window client area so that the results of drawing functions do not appear outside the area of the window.
You also can obtain a pointer to a CDC object for the client area of a window using the CWnd::GetDC() function. If you want a CDC pointer for the entire window area (including the title bar and borders), you can use the CWnd::GetWindowDC() function instead. You can even get a pointer to the entire Windows desktop by calling GetDesktopWindow()->GetWindowDC().
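A DC obtained with GetDC() must be released, which is why MFC also offers RAII wrappers such as CClientDC and CWindowDC that release the DC in their destructor. A minimal portable sketch of that idea (the `FakeWindow` and `ScopedClientDC` names are hypothetical; this is not the real MFC implementation):

```cpp
// Acquire a device context in the constructor, release it in the
// destructor, so the DC is never leaked on early returns or exceptions.
struct FakeWindow {
    int openDCs = 0;                      // stands in for outstanding HDCs
    int GetDC()      { ++openDCs; return openDCs; }
    void ReleaseDC() { --openDCs; }
};

struct ScopedClientDC {
    FakeWindow& wnd;
    int hdc;                              // stands in for the HDC handle
    explicit ScopedClientDC(FakeWindow& w) : wnd(w), hdc(w.GetDC()) {}
    ~ScopedClientDC() { wnd.ReleaseDC(); }
};
```

With the wrapper, every code path out of the drawing scope balances the GetDC()/ReleaseDC() pair automatically.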
Links:
Windows GDI Tutorial - http://www.codeproject.com/KB/graphics/gditutorial.aspx
Inform IT Tutorial - http://www.informit.com/library/content.aspx?b=Visual_C_PlusPlus&seqNum=67
6. STA vs MTA
What is a multi-threaded apartment (MTA)? What is a single-threaded apartment (STA)?
Apartments were introduced by Microsoft in NT 3.51 and later in Windows 95 to isolate the problem of running legacy non-thread-safe code in a multithreaded environment. Each such thread was "encapsulated" into a so-called single-threaded apartment. The reason to create an object in an apartment is thread safety: COM is responsible for synchronizing access to the object even if the object inside the apartment is not thread-safe. Multithreaded apartments (MTA, also called the free-threading apartment) were introduced in NT 4.0. The idea behind the MTA is that COM is not responsible for synchronizing object calls between threads.
In the MTA the developer is responsible for that. See "Professional DCOM Programming" by Dr. Grimes et al. or "Essential COM" by Don Box for further discussion of this topic.
Link : http://www.techinterviews.com/?p=103
7. MFC Misc
a. In a dialog-based application, we can use the following call to get a handle to the active device context:
CDC *pDC = GetDC();
b. We can use the following API to repaint a particular area (rectangle) on the screen:
InvalidateRect()
c. The IPicture COM interface can be used to load JPEG and GIF images (but not PNG).
d. Image-processing libraries such as CxImage can be used to load JPEG, GIF, and PNG.
e. WM_TIMER (OnTimer): MFC timers can be used to achieve an animation effect if we have a set of images (BITMAP, JPEG, …); the trick is to load the images at an interval.
f. Loading JPEG and GIF pictures - http://www.arstdesign.com/articles/picloader.html
Add GIF-animation to your MFC and ATL projects with the help of CPictureEx and CPictureExWnd - http://www.codeproject.com/KB/graphics/pictureex.aspx
VC++ Setting WallPaper : http://www.codeproject.com/KB/applications/wallpaperq.aspx
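The timer-driven animation from item (e) above boils down to advancing a frame index on each WM_TIMER tick and invalidating the window so it redraws. A portable sketch of just that logic (the `Animation` class is hypothetical; a real handler would call InvalidateRect() where noted):

```cpp
// On each timer tick, advance to the next image index, wrapping around.
struct Animation {
    int frameCount;
    int current;
    explicit Animation(int frames) : frameCount(frames), current(0) {}

    int OnTimer() {  // what a WM_TIMER / OnTimer() handler would do
        current = (current + 1) % frameCount;
        return current;  // caller would InvalidateRect() to redraw this frame
    }
};
```

Pairing this with SetTimer(id, intervalMs, nullptr) in OnInitDialog() and drawing lines[current] in OnPaint() gives the flip-book animation effect the note describes.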
Product/Application Development on the Windows CE Platform
Windows CE builds target the following CPU architectures:
ARMV4I
MIPSII
MIPSII_FP
MIPSIV
MIPSIV_FP
SH4
x86
(http://eleves.ec-lille.fr/~couprieg/index.php?2008/06/17/39-first-issues-when-porting-an-application-on-windows-ce)
There is no errno.h or signal.h on Windows CE.
There is no SignalObjectAndWait either, and working around that is painful.