Thursday, July 23, 2009

Database transaction

A database transaction comprises a unit of work performed within a database management system (or similar system) against a database, and treated in a coherent and reliable way independent of other transactions. Transactions in a database environment have two main purposes:

1. To provide reliable units of work that allow correct recovery from failures and keep a database consistent even in cases of system failure, when execution stops (completely or partially) and many operations upon a database remain uncompleted, with unclear status.
2. To provide isolation between programs accessing a database concurrently. Without isolation the programs' outcomes are typically erroneous.

A database transaction, by definition, must be atomic, consistent, isolated and durable. Database practitioners often refer to these properties of database transactions using the acronym ACID.

Transactions provide an "all-or-nothing" proposition, stating that each work-unit performed in a database must either complete in its entirety or have no effect whatsoever. Further, the system must isolate each transaction from other transactions, results must conform to existing constraints in the database, and transactions that complete successfully must get written to durable storage.
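The "all-or-nothing" behavior described above can be sketched with Python's built-in sqlite3 module. This is a minimal illustration, not tied to any particular product; the table and account names are invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

try:
    # Using the connection as a context manager gives transaction scope:
    # commit on success, rollback if an exception escapes the block.
    with conn:
        conn.execute("UPDATE accounts SET balance = balance - 50 WHERE name = 'alice'")
        # Simulate a failure after the debit but before the matching credit.
        raise RuntimeError("simulated crash mid-transaction")
except RuntimeError:
    pass

# The whole unit of work was rolled back: alice keeps 100, bob received nothing.
balances = dict(conn.execute("SELECT name, balance FROM accounts"))
```

Because the debit and the (never-reached) credit belong to one transaction, the failure leaves the database exactly as it was before the transaction began.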

Thursday, July 09, 2009

NetApp Named MS 2009 Storage Solutions Partner


NetApp has been named the Microsoft 2009 Partner of the Year, in the advanced infrastructure, storage solutions category. NetApp was chosen out of an international field of top Microsoft partners for delivering market-leading customer solutions for Microsoft Hyper-V environments.

NetApp was chosen for its partnership with Microsoft and for complete solutions that help reduce customers' costs, maximize storage efficiency, and improve availability in virtual environments. With NetApp, Microsoft customers can reduce the amount of storage they need by at least 50 percent when they also use NetApp technologies such as thin provisioning and deduplication, the company said.


NetApp utilizes a variety of Windows Server platform technologies to improve storage system management and streamline backup, recovery, and remote replication in Windows Server 2008 Hyper-V environments. In addition, tight integration with the Microsoft System Center family of products and additional application-integrated NetApp products help maximize uptime for a wide variety of application environments, including Microsoft Exchange Server, SQL Server, and SharePoint Server. Since NetApp storage solutions are tightly integrated with Microsoft's technologies, customers are backed by NetApp's global customer support infrastructure, which integrates Microsoft Premier Support.

"We are very excited about working with NetApp to deliver innovative end-to-end solutions to our joint customers. The combination of our technology with NetApp's storage solutions gives our customers the tools they need to improve efficiency, reduce costs, and drive their businesses forward," said Kim Akers, General Manager (Global Partner Team), Microsoft.

Rajesh Janey, President (Sales), India & SAARC, NetApp, said, "Being chosen as Microsoft's Storage Partner of the Year is a great honor for NetApp. While this award establishes NetApp as the leading storage solutions provider for Microsoft virtualization customers, more than anything it underlines the customer success our partnership and close collaboration has delivered."

Wednesday, July 01, 2009

Native MySQL Storage Engines

MySQL offers a number of internally developed storage engines that are well suited for data warehouses, the most popular being the default MyISAM storage engine. The MyISAM engine delivers rapid data loading capabilities, fast read times, and much more for data warehousing users. MyISAM comfortably supports typical data warehouse volumes of up to 1 TB. MySQL offers other storage engines that can be used for data warehousing as well. MySQL supports these key data warehousing features:

  • Data/Index partitioning (range, hash, key, list, composite) in MySQL 5.1 and above
  • No practical storage limits with automatic storage management
  • Built-in Replication
  • Strong indexing support (B-tree, fulltext, clustered, hash, GIS)
  • Multiple, configurable data/index caches
  • Pre-loading of data into caches
  • Unique query cache (caches result set + query; not just data)
  • Parallel data load
  • Multi-insert DML
  • Read-only tables
  • Cost-based optimizer
  • Wide platform support
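To give a flavor of the first feature in the list, here is a conceptual sketch (in Python, not MySQL itself) of how RANGE partitioning routes a row: the partitioning column is compared against ordered upper bounds, much as MySQL 5.1's PARTITION BY RANGE assigns rows to partitions. The bounds and values are invented for illustration.

```python
import bisect

def route_to_partition(value, upper_bounds):
    """Index of the first partition whose (exclusive) upper bound exceeds value."""
    i = bisect.bisect_right(upper_bounds, value)
    if i == len(upper_bounds):
        # MySQL would likewise reject a row that fits no defined partition.
        raise ValueError("no partition for value %r" % (value,))
    return i

# p0 holds years < 1990, p1 holds years < 2000, p2 holds years < 2010
year_bounds = [1990, 2000, 2010]
```

The same comparison lets the optimizer skip whole partitions when a query's predicate falls outside their range (partition pruning).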
Infobright

Infobright offers a storage engine for the MySQL Server that is tailor-made for large-scale, analytic-style data warehousing. Infobright enables MySQL users to scale up to data warehouses that support data volumes of 1-10 TB or more with these key capabilities:

  • Column-oriented design
  • High data compression capabilities
  • Advanced optimizer with "Knowledge Grid"
  • High-speed loader

Friday, June 19, 2009

MySQL to Microsoft SQL database Converter

The MySQL to MSSQL Database Converter utility offers an easy, read-only, responsive solution for converting MySQL database records into Microsoft SQL database format. The MySQL-to-Microsoft SQL Server database migration utility does not change the tables, rows, or columns of the source database, and it maintains the integrity of the MySQL database file during the migration process.

The advanced database translation software is purpose-built for professionals who need quick and reliable database conversion; it supports the entire database architecture, including schemas, default values, primary keys, and other database attributes. The cost-effective, secure database converter even provides an option to save the converted MSSQL database at a user-defined location.


The Microsoft SQL database generator is a polished program with an attractive graphical interface and an exclusive do-it-yourself design that requires no technical learning to use. Download the freeware demo of the MySQL to Microsoft SQL database converter utility to evaluate the software's functionality; if you are satisfied with the results, purchase the full version from our website.

Software Features

  • Powerful and versatile MySQL to MSSQL database converter suite that is simple and fast to use.
  • Supports all database key constraints, data types, schemas, attributes, tables, rows, etc., even after MySQL database conversion.
  • Specifically designed for database developers and programmers, who save time by converting single or multiple database records at once.
  • Far easier to use than alternatives, and supports all versions of MSSQL and MySQL databases.
  • Provides the option of saving the converted database records either at a new location with a new name or by overwriting the existing database records.
  • Has an impressive, user-friendly graphical interface and an exclusive do-it-yourself feature.
  • The Windows-compatible database conversion wizard runs smoothly on all Windows platforms.

Advantages

  • Time-saving, reliable, and easy-to-use database conversion utility.
  • Reduces labor costs, since it automatically converts the MySQL database records.
  • No database scripting or encoding is required for database conversion.
  • Works efficiently with all MySQL and MSSQL database versions.
  • An alternative to complicated MySQL to MSSQL database conversion programs.

System Requirement

  • Pentium-class or equivalent processor
  • RAM (128 MB recommended)
  • 10 MB of free space

Supported Operating System:

Windows 98/ME/NT/2000/2003/XP/Vista

Monday, May 25, 2009

Microsoft SQL

The code base for MS SQL Server (prior to version 7.0) originated in Sybase SQL Server, and was Microsoft's entry to the enterprise-level database market, competing against Oracle, IBM, and, later, Sybase itself. Microsoft, Sybase and Ashton-Tate originally teamed up to create and market the first version, named SQL Server 1.0, for OS/2 (about 1989), which was essentially the same as Sybase SQL Server 3.0 on Unix, VMS, etc. Microsoft SQL Server 4.2 was shipped around 1992 (available bundled with Microsoft OS/2 version 1.3). Later, Microsoft SQL Server 4.21 for Windows NT was released at the same time as Windows NT 3.1. Microsoft SQL Server v6.0 was the first version designed for NT, and did not include any direction from Sybase. About the time Windows NT was released, Sybase and Microsoft parted ways and each pursued their own design and marketing schemes. Microsoft negotiated exclusive rights to all versions of SQL Server written for Microsoft operating systems. Later, Sybase changed the name of its product to Adaptive Server Enterprise to avoid confusion with Microsoft SQL Server. Until 1994, Microsoft's SQL Server carried three Sybase copyright notices as an indication of its origin.

Since parting ways, several revisions have been done independently. SQL Server 7.0 was a rewrite from the legacy Sybase code. It was succeeded by SQL Server 2000, which was the first edition to be launched in a variant for the IA-64 architecture. In the eight years since the release of Microsoft's previous SQL Server product (SQL Server 2000), advancements have been made in performance, the client IDE tools, and several complementary systems that are packaged with SQL Server 2005. These include: an ETL tool (SQL Server Integration Services, or SSIS), a Reporting Server, an OLAP and data mining server (Analysis Services), and several messaging technologies, specifically Service Broker and Notification Services.

Monday, May 04, 2009

Integrated Circuit


In electronics, an integrated circuit (also known as IC, microcircuit, microchip, silicon chip, or chip) is a miniaturized electronic circuit (consisting mainly of semiconductor devices, as well as passive components) that has been manufactured in the surface of a thin substrate of semiconductor material. Integrated circuits are used in almost all electronic equipment in use today and have revolutionized the world of electronics.

A hybrid integrated circuit is a miniaturized electronic circuit constructed of individual semiconductor devices, as well as passive components, bonded to a substrate or circuit board.

Integrated circuits were made possible by experimental discoveries which showed that semiconductor devices could perform the functions of vacuum tubes, and by mid-20th-century technology advancements in semiconductor device fabrication. The integration of large numbers of tiny transistors into a small chip was an enormous improvement over the manual assembly of circuits using discrete electronic components. The integrated circuit's mass production capability, reliability, and building-block approach to circuit design ensured the rapid adoption of standardized ICs in place of designs using discrete transistors.

There are two main advantages of ICs over discrete circuits: cost and performance. Cost is low because the chips, with all their components, are printed as a unit by photolithography and not constructed one transistor at a time. Furthermore, much less material is used to construct a circuit as a packaged IC die than as a discrete circuit. Performance is high since the components switch quickly and consume little power (compared to their discrete counterparts), because the components are small and close together. As of 2006, chip areas range from a few square mm to around 350 mm², with up to 1 million transistors per mm².

Thursday, April 16, 2009

Old IBM Personal Computer


The IBM Personal Computer, commonly known as the IBM PC, is the original version and progenitor of the IBM PC compatible hardware platform. It is IBM model number 5150, and was introduced on August 12, 1981. It was created by a team of engineers and designers under the direction of Don Estridge of the IBM Entry Systems Division in Boca Raton, Florida.

Alongside "microcomputer" and "home computer", the term "personal computer" was already in use before 1981. It was used as early as 1972 to characterize Xerox PARC's Alto. However, because of the success of the IBM Personal Computer, the term came to mean more specifically a microcomputer compatible with IBM's PC products.

Monday, April 06, 2009

Keep Your System Faster


Follow these tips and you will definitely have a much faster and more reliable PC!

1. Wallpapers: They slow your whole system down, so if you're willing to compromise, have a basic plain one instead!

2. Drivers: Update your hardware drivers as frequently as possible. New drivers tend to increase system speed, especially in the case of graphics cards, whose drivers are updated by the manufacturer very frequently!

3. Minimizing: If you want to use several programs at the same time, then minimize those you are not using. This helps reduce the load on your RAM.

Monday, March 30, 2009

Boot your Computer Faster

Press Start -> Run, type msconfig, and press Enter. Go to the Startup tab. Here you will see a list of startup items: all the programs that automatically start when you boot your PC. It is these that slow down the boot-up process. Uncheck all the unwanted items (like MS Office, messengers, and other utilities that you may not need at startup), but don't uncheck your antivirus software. Restart your PC and see for yourself: your PC will now boot faster.

A great new feature in Microsoft Windows XP is the ability to do a boot defragment. This places all boot files next to each other on the disk to allow for faster booting. By default this option is enabled, but on some systems it is not, so below is the information on how to turn it on:

Go to the Start Menu and click Run
Type in regedit, then click OK
Find "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Dfrg\BootOptimizeFunction"
Select "Enable" from the list on the right
Right-click on it and select "Modify"
Change the value to Y.
Reboot your PC and see the change for yourself.

Monday, March 23, 2009

Server in Hardware

Hardware requirements for servers vary, depending on the server application. Absolute CPU speed is not usually as critical to a server as it is to a desktop machine. Servers' duties to provide service to many users over a network lead to different requirements like fast network connections and high I/O throughput. Since servers are usually accessed over a network they may run in headless mode without a monitor or input device. Processes which are not needed for the server's function are not used. Many servers do not have a graphical user interface (GUI) as it is unnecessary and consumes resources that could be allocated elsewhere. Similarly, audio and USB interfaces may be omitted.

Servers often run for long periods without interruption and availability must often be very high, making hardware reliability and durability extremely important. Although servers can be built from commodity computer parts, mission-critical servers use specialized hardware with low failure rates in order to maximize uptime.

For example, servers may incorporate faster, higher-capacity hard drives, larger computer fans or water cooling to help remove heat, and uninterruptible power supplies that ensure the servers continue to function in the event of a power failure. These components offer higher performance and reliability at a correspondingly higher price. Hardware redundancy—installing more than one instance of modules such as power supplies and hard disks arranged so that if one fails another is automatically available—is widely used. ECC memory devices, which detect and correct errors, are used; non-ECC memory can cause data corruption.

Monday, March 16, 2009

FTP Server -File Transfer Protocol

File Transfer Protocol (FTP) is a network protocol used to transfer data from one computer to another through a network such as the Internet. FTP is a protocol for exchanging and manipulating files over a TCP computer network. An FTP client may connect to an FTP server to manipulate files on that server.

In active mode, the FTP client opens a dynamic port, sends the FTP server the dynamic port number on which it is listening over the control stream and waits for a connection from the FTP server. When the FTP server initiates the data connection to the FTP client it binds the source port to port 20 on the FTP server.

In passive mode, the FTP server opens a dynamic port, sends the FTP client the server's IP address to connect to and the port on which it is listening (a 16-bit value broken into a high and a low byte) over the control stream, and waits for a connection from the FTP client. In this case, the FTP client binds the source port of the connection to a dynamic port.
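The high/low byte arithmetic behind passive mode can be sketched in a few lines of Python. The server's "227" reply carries the IP address and listening port as six comma-separated numbers; the sample reply below is made up for illustration.

```python
import re

def parse_pasv(reply):
    """Decode a PASV reply: four IP octets, then the port's high and low bytes."""
    m = re.search(r"\((\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\)", reply)
    h1, h2, h3, h4, p_hi, p_lo = (int(g) for g in m.groups())
    # The 16-bit port is reassembled as high_byte * 256 + low_byte.
    return "%d.%d.%d.%d" % (h1, h2, h3, h4), p_hi * 256 + p_lo

host, port = parse_pasv("227 Entering Passive Mode (192,168,1,2,197,143)")
```

Here the client would open its data connection to 192.168.1.2 on port 197 * 256 + 143 = 50575.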

Monday, March 09, 2009

Adaptive Server Enterprise

Adaptive Server Enterprise (ASE) is Sybase Corporation's flagship enterprise-class relational database management system product. ASE is predominantly used on the UNIX platform but is also available for Windows.

Originally created for UNIX platforms in 1987, Sybase Corporation's primary relational database management system product was initially marketed under the name Sybase SQL Server. In 1988, SQL Server for OS/2 was co-developed for the PC by Sybase, Microsoft, and Ashton-Tate. Ashton-Tate divested its interest and Microsoft became the lead partner after porting SQL Server to Windows NT.

Microsoft and Sybase sold and supported the product through version 4.21. In 1993 the co-development licensing agreement between Microsoft and Sybase ended and the companies parted ways while continuing to develop their respective versions of the software.

In 1995, Sybase released SQL Server 11.0. Starting with version 11.5, released in 1996, Sybase moved to differentiate its product from Microsoft SQL Server by renaming it to Adaptive Server Enterprise.

Sybase provides native low-level programming interfaces to its database server, which use a protocol called Tabular Data Stream. Prior to version 10, DBLIB (Data Base Library) was used; version 10 onwards uses CTLIB (Client Library).

Monday, March 02, 2009

SQL Server Express

Microsoft SQL Server Express is the freely-downloadable and distributable version of Microsoft's SQL Server relational database management system. It offers a database solution specifically targeted for embedded and smaller-scale applications. Unlike its predecessor, MSDE, there is no concurrent workload governor which "limits performance if the database engine receives more work than is typical of a small number of users." It does, however, have a number of technical restrictions which make it undesirable for large-scale deployments, including:

* Maximum database size of 4 GB per database (compared to 2 GB in the former MSDE). The 4 GB limit is per database (log files excluded) and can be extended in some scenarios through the use of multiple interconnected databases.
* Hardware utilization limits:
o Single physical CPU, multiple cores
o 1 GB of RAM (runs on any size RAM system, but uses only 1 GB)
* Absence of SQL Server Agent Service

Although its predecessor, MSDE, was virtually devoid of basic GUI management tools, SQL Server Express includes several GUI tools for database management. Among these tools are:

* SQL Server Management Studio Express
* SQL Server Configuration Manager
* SQL Server Surface Area Configuration tool
* SQL Server Business Intelligence Development Studio.

Tuesday, February 24, 2009

Database - A collection of information

A collection of information organized in such a way that a computer program can quickly select desired pieces of data. You can think of a database as an electronic filing system.

Traditional databases are organized by fields, records, and files. A field is a single piece of information; a record is one complete set of fields; and a file is a collection of records. For example, a telephone book is analogous to a file. It contains a list of records, each of which consists of three fields: name, address, and telephone number.
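The telephone-book analogy maps neatly onto code: each record is one complete set of fields, and a "file" is just a collection of records. The entries below are invented for illustration.

```python
from collections import namedtuple

# A Record is one complete set of fields: name, address, telephone.
Record = namedtuple("Record", ["name", "address", "telephone"])

# A file is a collection of records, like the telephone book itself.
phone_file = [
    Record("Ada Lovelace", "12 Analytical St", "555-0100"),
    Record("Alan Turing", "9 Bletchley Rd", "555-0199"),
]

def lookup(records, name):
    """Select the records whose name field matches, like a simple query."""
    return [r for r in records if r.name == name]
```

A real DBMS adds indexing, concurrency, and a query language on top, but the field/record/file structure is the same.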

An alternative concept in database design is known as Hypertext. In a Hypertext database, any object, whether it be a piece of text, a picture, or a film, can be linked to any other object. Hypertext databases are particularly useful for organizing large amounts of disparate information, but they are not designed for numerical analysis.

To access information from a database, you need a database management system (DBMS). This is a collection of programs that enables you to enter, organize, and select data in a database.

Friday, February 13, 2009

DEC Alpha

Alpha, originally known as Alpha AXP, was a 64-bit reduced instruction set computer (RISC) instruction set architecture (ISA) developed by Digital Equipment Corporation (DEC), designed to replace the 32-bit VAX complex instruction set computer (CISC) ISA and its implementations. Alpha was implemented in microprocessors originally developed and fabricated by DEC. It was used in a variety of DEC workstations and servers, eventually forming the basis for almost their entire mid-to-upper-scale lineup. Several third-party vendors also produced Alpha systems, as well as PC compatible form factor motherboards.

Alpha supports both the OpenVMS (previously known as OpenVMS AXP) operating system and Tru64 UNIX (previously known as DEC OSF/1 AXP and Digital UNIX). Open source operating systems also run on the Alpha, notably Linux and BSD UNIX flavors (FreeBSD support ended as of 7.0). Microsoft supported the processor in Windows NT until NT 4.0 SP6 but did not extend Alpha support beyond RC1 of Windows 2000.

Monday, February 09, 2009

B-tree

In computer science, a B-tree is a tree data structure that keeps data sorted and allows searches, insertions, and deletions in logarithmic amortized time. Unlike self-balancing binary search trees, it is optimized for systems that read and write large blocks of data. It is most commonly used in databases and file systems.

In B-trees, internal (non-leaf) nodes can have a variable number of child nodes within some pre-defined range. When data is inserted or removed from a node, its number of child nodes changes. In order to maintain the pre-defined range, internal nodes may be joined or split. Because a range of child nodes is permitted, B-trees do not need re-balancing as frequently as other self-balancing search trees, but may waste some space, since nodes are not entirely full. The lower and upper bounds on the number of child nodes are typically fixed for a particular implementation. For example, in a 2-3 B-tree (often simply referred to as a 2-3 tree), each internal node may have only 2 or 3 child nodes.
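A lookup in a 2-3 tree (the smallest B-tree, where internal nodes hold one or two keys and two or three children) can be sketched as follows. The tree is hand-built for illustration rather than produced by an insertion algorithm.

```python
import bisect

class Node:
    def __init__(self, keys, children=None):
        self.keys = keys                # sorted keys stored in this node
        self.children = children or [] # empty for leaf nodes

def search(node, key):
    """Descend one node per level, so cost is logarithmic in the key count."""
    i = bisect.bisect_left(node.keys, key)
    if i < len(node.keys) and node.keys[i] == key:
        return True
    if not node.children:
        return False
    # The i-th child covers the key range between keys[i-1] and keys[i].
    return search(node.children[i], key)

# A small 2-3 tree: root holds 10; left child holds 3 and 7, right holds 15 and 20.
root = Node([10], [Node([3, 7]), Node([15, 20])])
```

Because each node can hold several keys, a disk-based B-tree reads one large block per level, which is exactly why databases and file systems favor it over binary trees.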

Monday, February 02, 2009

XQuery

XQuery is a query language (with some programming language features) that is designed to query collections of XML data. It is semantically similar to SQL.

XQuery 1.0 was developed by the XML Query working group of the W3C. The work was closely coordinated with the development of XSLT 2.0 by the XSL Working Group; the two groups shared responsibility for XPath 2.0, which is a subset of XQuery 1.0. XQuery 1.0 became a W3C Recommendation on January 23, 2007.

"The mission of the XML Query project is to provide flexible query facilities to extract data from real and virtual documents on the World Wide Web, therefore finally providing the needed interaction between the Web world and the database world. Ultimately, collections of XML files will be accessed like databases".
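Python's standard library has no XQuery engine, but ElementTree's limited XPath-style syntax gives a flavor of querying a collection of XML data; the document below is made up for the example.

```python
import xml.etree.ElementTree as ET

# A tiny XML "collection" to query (contents are invented).
doc = ET.fromstring(
    "<books>"
    "<book year='2007'><title>XQuery Basics</title></book>"
    "<book year='2001'><title>Older Title</title></book>"
    "</books>"
)

# Select titles of books published in 2007, akin to an XQuery path expression.
recent_titles = [b.findtext("title") for b in doc.findall("book[@year='2007']")]
```

A full XQuery processor adds FLWOR expressions, typing, and functions on top of this kind of path navigation.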

Monday, January 26, 2009

Transact-SQL

Transact-SQL (T-SQL) is Microsoft's and Sybase's proprietary extension to SQL. Microsoft's implementation ships in the Microsoft SQL Server product. Sybase uses the language in its Adaptive Server Enterprise, the successor to Sybase SQL Server.

Transact-SQL enhances SQL with these additional features:

* Control-of-flow language
* Local variables
* Various support functions for string processing, date processing, mathematics, etc.
* Improvements [citation needed] to DELETE and UPDATE statements.

Monday, January 12, 2009

Visual Studio 97

Microsoft first released Visual Studio in 1997, bundling many of its programming tools together for the first time. Visual Studio 97 was released in two editions, Professional and Enterprise. It included Visual Basic 5.0 and Visual C++ 5.0, primarily for Windows programming; Visual J++ 1.1 for Java and Windows programming; and Visual FoxPro 5.0 for database programming, specifically xBase. It introduced Visual InterDev for creating dynamically generated web sites using Active Server Pages. A snapshot of the Microsoft Developer Network library was also included.

Visual Studio 97 was Microsoft's first attempt at using the same development environment for multiple languages. Visual C++, Visual J++, InterDev, and the MSDN Library all used one environment, called Developer Studio. Visual Basic and Visual FoxPro used separate environments.

Monday, January 05, 2009

Microsoft Visual Studio

Microsoft Visual Studio is an Integrated Development Environment (IDE) from Microsoft. It can be used to develop console and graphical user interface applications, along with Windows Forms applications, web sites, web applications, and web services, in both native and managed code, for all platforms supported by Microsoft Windows, Windows Mobile, Windows CE, the .NET Framework, the .NET Compact Framework, and Microsoft Silverlight.

Visual Studio includes a code editor supporting IntelliSense as well as code refactoring. The integrated debugger works both as a source-level debugger and a machine-level debugger.