Website Accessibility and Usability

Usability is one of the most pressing issues in website development today. A site's usability is judged by its simplicity: how easily and quickly people can navigate the site, and therefore how easily they can reach the information they need.

Accessibility is a concept intertwined with usability. It refers to making website content available to all people.

Context

The issue has caught the attention of many sectors of society. Why? Because roughly one out of five people in America has some kind of disability, which amounts to tens of millions of Americans. The figure is still increasing as the population ages; in the past decade alone, a dramatic increase of 25% was seen.

Why the Internet?

One might ask, “Why is the Internet a central focus in this issue of usability?” The Internet has transformed people’s lives over the past decade, enabling them to do things they could not do before, and this includes people with disabilities. People with impairments do not have as many opportunities as people who are well and able. The Internet gives them avenues for communication, information gathering, social interaction, and cultural activities, and it opens up employment opportunities. However, statistics show that this potential is still not fully realized, because usability problems keep people with disabilities from using the Internet to the fullest.

Stakeholders

The issue of usability is not watched only by institutions that support people with disabilities; most sectors of society are following its progress closely. Institutions involved in governance, education, media, and public services, and even the business sector, are all observers in the game.

Benefits

Improving the accessibility of websites will benefit not only people with impairments but the whole web community: businesses, services, information campaigners, everyone.

Many people are calling for websites to be developed using a universal design approach: a way of developing web content that accommodates the widest possible range of users. Features of this approach include interoperability between applications, access for the disabled, and localization and customization.

Recommendations for Improving Accessibility

Listed below are some of the key recommendations from the Web Content Accessibility Guidelines 1.0, developed by the Web Accessibility Initiative of the W3C, on how to improve the accessibility of a website's content.

1. Provide alternatives to audio-visual content

Not all people can use every kind of content, whether because of a disability or because they are using an older browser. Movies, sound clips, animations, and similar content should be given text alternatives so that the information reaches the broadest possible range of viewers.
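
As a rough illustration (the file names and captions here are hypothetical), a text alternative can be as simple as an alt attribute on an image plus fallback text inside an embedded movie:

  <!-- Text alternative for an image: the alt attribute carries the information -->
  <img src="sales-chart.png"
       alt="Bar chart: quarterly sales rose from 120 to 180 units between Q1 and Q4">

  <!-- Text alternative for audio-visual content: fallback text and a transcript link -->
  <object data="product-demo.mov" type="video/quicktime">
    A short demonstration of the product.
    <a href="product-demo-transcript.html">Read the transcript</a> instead.
  </object>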

2. Developers shouldn’t rely on color alone

Many people have impaired color perception. Developers should not rely too heavily on color to convey information on a website. Color-coded charts should be supplemented with other cues, and the background and foreground colors of a site should have enough contrast that people with color-vision deficiencies can navigate it easily.
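
For example, a minimal CSS sketch (the exact color values are illustrative, not prescribed by the guidelines) pairs a high-contrast palette with a cue that does not depend on color alone:

  /* Strong contrast between foreground and background */
  body {
    color: #1a1a1a;            /* near-black text */
    background-color: #ffffff; /* white background */
  }

  /* Don't rely on color alone: pair the color cue with another visual cue */
  .error {
    color: #b00020;
    font-weight: bold;         /* the emphasis survives even if the color doesn't */
  }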

3. Clarification of the use of natural language

Content developers usually mark up changes of natural language within their pages. They should also identify the dominant language used on the site, so as to avoid confusion.
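
A small, hypothetical HTML fragment shows the idea: declare the page's dominant language once, then mark up any passage written in another language:

  <!-- Declare the document's primary language, then mark up foreign-language passages -->
  <html lang="en">
    <body>
      <p>The committee adopted the motion by acclamation,
         or as the delegates put it, <span lang="fr">par acclamation</span>.</p>
    </body>
  </html>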

4. Control of content changes that are time-sensitive

This issue particularly affects people with visual or cognitive impairments and anyone who cannot read text that moves quickly. Movement is often seen as enhancing the overall look of a site, but it can pose real problems for these users, so time-sensitive and moving content should be kept under the user's control.
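
One practical way to give users control over movement, sketched below with a hypothetical .ticker class, is to switch the animation off for users who have asked their system for reduced motion. Note that the prefers-reduced-motion media feature postdates WCAG 1.0 and is shown here only as an illustration of the principle:

  /* A hypothetical scrolling news ticker */
  .ticker {
    animation: scroll-left 12s linear infinite;
  }

  @keyframes scroll-left {
    from { transform: translateX(100%); }
    to   { transform: translateX(-100%); }
  }

  /* Stop the movement for users whose system requests reduced motion */
  @media (prefers-reduced-motion: reduce) {
    .ticker {
      animation: none;
    }
  }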

5. Accessibility of user interfaces that are embedded

Objects that possess their own interfaces should be made accessible; if that is not possible, an alternative solution must be provided.
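
As an illustration (the embedded file and the linked page are hypothetical), fallback content placed inside the object element gives users an accessible alternative when the embedded interface itself cannot be made accessible:

  <!-- If the embedded object's interface is not accessible, offer an alternative inside it -->
  <object data="stock-ticker.swf" type="application/x-shockwave-flash">
    Current prices are also published as a
    <a href="stock-prices.html">plain HTML table</a>, updated every 15 minutes.
  </object>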

6. Provision of orientation and context information

Providing information about how content is organized gives people the guidance they need to find and access it.
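
A brief, hypothetical sketch: a clear heading outline plus grouped and labelled form controls are two common ways of exposing how a page is organized:

  <!-- A logical heading outline and grouped, labelled form controls give users orientation -->
  <h1>Customer support</h1>
  <h2>Contact us</h2>
  <form action="/contact" method="post">
    <fieldset>
      <legend>Your details</legend>
      <label for="name">Name</label>
      <input id="name" name="name">
      <label for="email">Email</label>
      <input id="email" name="email" type="email">
    </fieldset>
  </form>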

There are other ways of improving a website's overall accessibility and usability. Developers should consider the different people who will view their websites and make them focal points of the design process.


7 Surefire Ways To Increase Your Traffic Starting Yesterday

Internet. Business. Profit. To merge all of these words into a successful whole, you will need another word: traffic. Every article you find about making your site or company successful will mention the importance of generating traffic.

So we all know that, at the core of it all, traffic is the most essential thing for a successful Internet-based business. Aside from ensuring that you have a great product to sell and that your company's internal organization is well taken care of, it's time to get to the nitty-gritty: generating traffic.

If you already have a site and you think you're not getting the traffic you're supposed to be getting, then it's time to reconsider. If you are contending in this very competitive business, you should always be a step ahead of your competition; increasing your traffic flow should have started yesterday.

Timing is essential; that's an old adage known to everyone. But with generating traffic, you should always be on your toes and a day ahead of everyone else. Never think of today or tomorrow as the starting point for making your site traffic-laden; it should always have been yesterday.

To help you generate more traffic for your site, here are seven surefire ways to increase your traffic, starting from yesterday.

1) Invest in good advertising with search engines

Google’s AdWords and Yahoo’s Overture provide great advertising schemes that are truly popular and assure great traffic. This surefire way to increase your traffic does cost some money, and while some shy away from spending money to increase traffic, in this case it is imperative, because AdWords and Overture are the top surefire way to increase your traffic.

You can see for yourself the rewards these search engine advertising methods have reaped for so many companies. Lots of sites feature these advertising systems, and many have signed on to reap the benefits. Do not be left behind. Every penny is worth it when using Google's and Yahoo's advertising.

2) Exchange or Trade Links with other sites

By exchanging links with other sites, both of you benefit from the efforts you each make to enhance your sites' traffic. When one site features another site's link, each can share the traffic the other generates. The effort is doubly beneficial because both of you are working to generate more traffic. The more links you trade with more sites, the more traffic you can expect.

3) Use Viral Marketing

Viral marketing lets you spread the word about your company and product at little or no cost. It is a marketing method that can be quite sneaky: you attach your company's name, product, or link to a piece of media such as a funny video, an entertaining game, an interesting article, or a piece of gossip or buzz. People get infected by the creativity and entertainment of the medium and pass it on to many others.

4) Search and use proper keywords or keyword phrases for your site’s content

Search engines look for certain keywords to show on their results pages. Having the right keywords and keyword phrases is therefore essential to ranking high in search engine results. You can write your own content or hire someone to do it for you.

5) Write Articles that can lead traffic to your site

Submit articles to sites that cover the same subjects your site deals in. If you sell car parts, write press releases and articles about cars and car parts. Attach your site's description and services at the end of the article, along with the link.

6) Join forums and form online communities

Capture a market and demonstrate your expertise and credibility. Once you have built a good foundation for your site, people will trust you and your site and will pass that trust on to many others. Traffic will certainly increase, because they know you can provide what they need.

7) Lastly, offer newsletters

If many people know what you are about and your existence is shared with many others, you will gain loyal traffic that brings you even more traffic by recommendation. If you arouse the curiosity of your customers, they will be moved to help you with your traffic.

How to Make a Website More Appealing to International Users

More and more people around the world are using the Internet, and the numbers are increasing every day. The Internet has become the primary source of information for many, and because of that, websites have to constantly improve the content and appearance of their pages in order to keep users interested in visiting.

What measures should web designers implement to make their sites more appealing to users around the world? Here is a list of issues commonly encountered in web design and the actions to consider:

1. Availability of basic features
First, the design of a website should be compatible with any browser; it should pass HTML and CSS validation tests. Second, websites should cater to disabled users, which won't be a problem as long as designers adhere to web standards. Third, navigating a website should be simple enough for all users; no one likes to arrive at a new site and have to puzzle out how to get around it. Fourth, the status bar should remain available: it shows the destination of links as the cursor moves over them, and it displays the status of the current page as it loads.

2. Appearance of the pages
There are four elements that make up the appearance of a web site. They are the fonts, color, graphics, and writing.

Fonts are not just a matter of the personal preference of the user or the designer. The primary importance of font choice is that it affects how quickly users can read the information presented. Arial is usually recommended over Times New Roman and Verdana.

When applying color, it is important that there is enough contrast between background and foreground for the content to be readable. For maximum contrast, use black text on a white background. Link colors should be kept at their standard settings.

When it comes to graphics, bear in mind that some pages become overloaded through the use of too many images. As much as possible, use graphics only to support the content being presented. Many people actually turn images off when browsing for information.

Web designers should remember the distinction between writing for the web and writing for print. Web content should be short and straight to the point.

3. Site performance
There are three factors that determine the overall performance of a web site. These are speed, tables, and connections.

Since everyone is hankering for more bandwidth, the best designers can do is avoid designs that consume too much of it, because not every user has access to a fast Internet connection.

To avoid making the site appear to take forever to download, avoid putting a whole page inside a single table. Instead, divide the page into several tables.

Web designers should not crowd a page with too many items, for the simple reason that each item requires a separate browser request before the whole page can finish downloading.

4. The occurrence of bugs
Of course, no one wants bugs in their system. To help avoid them, body text should be set with relative font sizes. Consider that some users have poor eyesight and want to enlarge the text through their own browser settings in order to read it more clearly. Suitable relative values would be:

font-size: smaller;

or

font-size: 100%;

URLs should be simple and short, containing no punctuation or spaces. Users should be able to copy a URL and paste it into an email message without it wrapping across multiple lines. To avoid dead links, set up redirects so that bookmarks and inbound links do not break.

Web designers should make sure that navigation features remain visible at all times, whatever the size of the window the user is using. Designers should not assume a maximized browser window when designing, because not every user surfs the Internet with the window maximized.

Top 5 Challenges for Virtual Server Data Protection

Data Protection and the Drive to Virtualization

The benefits of server virtualization are compelling and are driving the transition to large-scale virtual server deployments. From the cost savings realized through server consolidation to the business flexibility and agility inherent in the emergent private and public cloud architectures, virtualization technologies are rapidly becoming a cornerstone of the modern data center.

However, the lure of virtual server deployments is having unintended consequences for data storage and data protection. The consolidation of physical servers and networking is resulting in a massively converged IT infrastructure in which already limited resources are made even more scarce. Typical server consolidation ratios of 10 to 1 mean there is a fraction of the resources there once were for even routine IT management tasks like backup and recovery. In addition to fewer resources, massive data growth coupled with the expanding number of virtual machines is leading to an ever larger and more consolidated amount of data that must be managed and protected. The benefits of virtual servers in terms of cost savings, application flexibility and uptime are now driving customers to deploy more critical applications within a virtual machine context. These critical applications come with the most demanding SLAs for application uptime, granular recovery points, and rapid recovery times.

With this shift to virtualized data centers and round-the-clock operations, traditional data protection techniques need to be rethought. Data protection and data recovery must have minimal front-end impact and cannot rely exclusively on legacy methods that stream copies of data from production to the backend. A modern, effective solution minimizes the load on production systems, reduces administrative effort, enhances data protection and recovery, eases the transition to a virtualized data center, and enables cloud-based options when they are desired.

The Challenges of Virtual Server Data Protection

1 Exploding Backup Windows
High server consolidation and high virtual machine (VM) density concentrates data ownership to a small number of physical servers with most resources dedicated for production workloads. There are few resources, if any, left for traditional management tasks, such as backup, which moves data over a network during a fixed window. In this new world of consolidated and virtualized environments, storage and backup teams are being asked to protect large and growing data stores with a fraction of the compute, network and storage resources and to do so in less time.

As server resources continue to consolidate and virtual environments become more concentrated, the amount of data owned by virtual machines is skyrocketing. This massive growth in the amount of data to be owned, managed and protected by the virtual environment is compounding what is already an untenable situation when using a traditional streaming backup approach. Cases are emerging where a successful backup of multi-terabyte data stores using a traditional streamed backup approach is exceeding a 24-hour window, far in excess of what the modern data center requires.

Every virtual machine is essentially a set of large files (VMDKs in a VMware context). These large files are stored in datastores, which can be configured on iSCSI or Fibre Channel block storage LUNs or on NFS volumes. Traditional data protection techniques such as VMware’s vStorage APIs for Data Protection (VADP) or VMware Consolidated Backup (VCB) rely on an external agent to protect the VMDK files associated with virtual servers. The typical steps are as follows:

  • Quiesce the virtual servers to get a consistent set of VM image files.
  • Use the VADP enabled agent to read the VM image files from the Datastores.
  • Copy the image files to a backup disk target.
  • Release the Virtual Servers for normal operations.

While VADP brings much efficiency to this process, it is still a streaming method that moves the image files from the datastore to backup disk for protection. For environments with ever-shrinking backup windows, there is simply not enough time or bandwidth to move all the VM data. Even if the infrastructure is available to copy all this data, doing so places a tremendous burden on the datastores as the data is read.

2 Unprotected Virtual Machine Data
The ease of deploying new VMs leads to virtual machine sprawl, making it tedious and time-consuming for administrators to keep track of new virtual machines and to ensure the correct data protection and retention policies are applied to them. There is a major risk that important virtual machines may be created and never backed up. Today, many administrators spend a significant part of their day tracking down new VMs and manually applying data protection policies. In the modern data center, with hundreds or even thousands of virtual machines, this manual approach to enforcing VM protection policies is simply not an acceptable solution.

3 Lack of Application Integration
As more and more mission critical applications—like SQL, Exchange and Oracle—are virtualized, it is necessary to provide the same level of protection and recovery capabilities for these applications as they had in a purely physical server setting, while staying within the constraints imposed by a highly consolidated, virtualized environment. The modern data center now demands data protection solutions that deliver a level of application and virtualization platform awareness in order to provide concerted backup and restore capabilities that will ensure maximum uptime of these critical applications.

4 Inadequate Recovery Points
With high data growth and change rate, relying on last night’s backup for recovery is no longer sufficient. In addition, as organizations deploy more critical applications within a virtual server context, they are demanding Recovery Point Objectives (RPO) of hours. In other words, it is necessary to be able to recover to a few hours ago, not to last night’s backup, in order to minimize data loss and the impact to the organization as a result of any disruption. Creating frequent recovery points without impacting production activity is a huge challenge.

5 Lack of Restore Granularity
To further accelerate restores, organizations require an integrated approach to restoring data at the volume, file or application object level. The ability to restore an individual e-mail or file from within a virtual machine datastore is critical for ensuring application uptime and for meeting availability and uptime SLAs. Traditional approaches, which require remounting an entire virtual machine datastore (such as a VMDK) and searching through its contents to find a single user e-mail, are simply too time-consuming and resource-intensive to be a workable solution. Newer approaches are being introduced that enable file- and object-level restoration; however, they may require a second pass to generate that granular catalog, which adds unnecessary processing time and consequent risk to the data protection process. What is needed is a solution that delivers granular restore options down to the file or object level and does so from a single-pass backup operation.

What you should know about a Computer Programming Career

Computer programming is one of the most important and exciting careers today.  It is also a field that offers plenty of job opportunities for graduates.  It’s one of the best fields of endeavor for people who love technology and are willing to try out new things.  If you’re considering pursuing a computer programming career, here are some things that you should know:

What is computer programming?
Computer programming is basically the process of writing codes to create a computer program.  A programming language is used to write this code, also known as the source code.  Computer programming is actually an umbrella term that encompasses all types of programming involving the use of computers.  The design and method utilized to write a source code will depend on the type of computer language used for the job.  Some of the most common computer languages include BASIC, COBOL, FORTRAN, C++, Java, Visual Basic, Python and PHP.

The job of computer programming also includes testing the source code, debugging it to check for flaws and weaknesses and maintaining it to ensure optimum performance.

Creating the code
The soul of computer programming is the creation of the source code, which can either be brand new or something created to modify or improve upon an already existing code.  The object of the source code is to build a program that will perform a particular series of tasks based on a specific set of commands.  This is called customization.

The end result of writing the source code is a computer program.  In software engineering, computer programming is an important initial phase.

What does a computer programmer do?
The main job of a computer programmer is to write the codes that serve as the foundation of software programs.  He is also tasked to test, troubleshoot, debug and maintain the program to ensure its quality and reliability.

Generally, the tasks that a computer programmer must perform are assigned by another person, usually the system analyst.  The computer programmer’s job is then to write the program, test it, modify it if necessary and ensure that it passes compatibility and quality standards.  If errors are found, it is the computer programmer’s job to ensure that they are corrected.

The job of a computer programmer usually requires hours upon hours spent in front of a computer to design and write a computer program.  Depending on the type of program being written, its purpose and the complexity of the commands required in order for a computer to execute the series of steps involved, writing a program can take several months to several years to complete.

The length of time it takes for a program to be completed often necessitates breaking a single program down into a smaller series of steps.  These steps are then assigned as tasks to a group of programmers who work on them independently.  The final step is to put the results together and produce one coherent and useful computer program.

Computer programming is a very dynamic field and involves plenty of imagination and discipline.  Although there are set standards for the tasks involved, the profession itself does not require certification tests from government agencies.  There are also no state and federal licenses to obtain.

How much does a computer programming job pay?
That will depend on the experience and level of expertise of the programmer.  Most entry-level programmers earn a minimum of about $33,000 a year while mid-level programmers earn approximately $50,000 a year.  For senior level programmers and those who have several years of experience behind them, their typical take-home pay averages at about $65,000 a year.  Consultants, some managers and those who have advanced well in their computer programming career are often paid more.

Computer Programming: Working In a Team

Computer programming requires some very intricate work.  This is the type of work that thrives on detail, and people who work in this field understand that the absence of even the most minute element can make a huge difference in the overall result.  If a programmer fails to catch such a problem, bugs will appear in the system and errors will emerge later on.  Programming is also taxing work, requiring hours upon hours of writing, testing and debugging.  This is why computer programming thrives on teamwork.  Without teamwork, a single computer program could take decades to complete.

Although one programmer has the necessary skills and knowledge to work competently on a problem or even create a program, he or she can only do so much.  Creating the source code for an operating system, for example, will require thousands of man-hours from a single programmer and most probably, he or she will only be halfway through.  There just isn’t enough time for one or even two programmers to work effectively to produce a usable program.

Team profile
So what constitutes a team in computer programming?  A team is usually headed by the team leader, such as a systems analyst or senior programmer.  The senior programmer is usually a person who has had years of training and experience behind him.  His task is to supervise the team, lead in brainstorming and problem solving sessions, delegate assignments, check the correctness of the coding, dispense advice and recommendations and lead in debugging and software maintenance.

The team leader is the one who holds the team together and ensures there is a well-coordinated effort that will lead to the desired results.  All team members report to him and depending on the size of the project, the team leader may have an assistant or another leader to work with.

The team is usually composed of junior or entry-level programmers, particularly those who may have the qualifications but not the number of years’ worth of experience yet.  Depending on what the team leader wants, a junior programmer may be tasked to work on his own on much simpler assignments or he may be assigned as part of a group.  This group may consist of other entry-level programmers or more experienced professionals.

The members of a team are chosen based on their expertise.  At the beginning of a project, the team leader, along with other more senior programmers, will try to break down the problem into components, which will consist of tasks.  Tasks can vary according to complexity and function and will be assigned to a team who has the skills to complete them.

The number of programmers in a team can be as small as 3 or it can number in the dozens or even hundreds.  Again, it all depends on the size of the project and the availability of resources.

Teamwork is a necessary component of computer programming.  It helps pool a group’s resources and form a coordinated effort to produce a particular program or piece of software.  In some cases, such as exceptionally huge projects, teams may work alternately or in shifts, which makes it necessary for them to sustain coordination across those shifts.

Teamwork drives computer programming.  The vast majority of the computer programs and software we enjoy today – from the operating systems to the video games to the technology that runs our phones – were produced not by a single programmer but by a team.  Whatever has made using computers and other forms of technology that much easier and more convenient is something we owe to teams of well-trained and highly skilled computer programmers.

The challenges of Web 2.0 applications

Rich Internet applications allow for dynamic, asynchronous data transfer, using multiple protocols and a variety of servers. They gather data from distributed, heterogeneous sources, including cloud-based and external data storage options. Thick clients with widgets and client-side functionality often have server-side components, which may need additional processing before the server sends the data back to the client. Developers who build these widgets—often adding them from available toolkits—do so on their development machines and don’t realize that, once separated across the network, the server component may cause latency and affect the overall system performance.

New technologies such as Ajax enable prefetching, where every new letter that a user enters into a search engine triggers a new set of suggested results dynamically delivered from the server. All this activity generates a lot of network traffic and can significantly impact performance. Network latency and bandwidth constraints can also create performance bottlenecks. To accurately predict the performance of an application, it is necessary to test individual components and services, but equally critical are server monitoring and end-to-end performance testing, along with accurate WAN emulation.
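
As a rough sketch of this pattern (the /suggest endpoint is hypothetical and assumed to return a JSON array of suggestion strings), per-keystroke suggestion requests are commonly debounced so that a request goes out only after the user pauses typing, which is one way the traffic described above is kept in check:

  <input id="search" type="text" placeholder="Search...">
  <ul id="suggestions"></ul>

  <script>
    var timer = null;
    document.getElementById('search').addEventListener('keyup', function (event) {
      clearTimeout(timer);               // drop the request queued for the previous keystroke
      var query = event.target.value;
      timer = setTimeout(function () {   // ask the server only after typing pauses for 300 ms
        fetch('/suggest?q=' + encodeURIComponent(query))        // hypothetical endpoint
          .then(function (response) { return response.json(); })
          .then(function (items) {       // assumed: a JSON array of suggestion strings
            var list = document.getElementById('suggestions');
            list.innerHTML = '';
            items.forEach(function (item) {
              var li = document.createElement('li');
              li.textContent = item;
              list.appendChild(li);
            });
          });
      }, 300);
    });
  </script>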

Testing Web 2.0 applications presents its own set of challenges. The complexity of new technologies, the lack of commonly recognized and accepted standards, and the sheer multitude of emerging frameworks and toolkits make it difficult for companies to build Web 2.0 testing strategies and select appropriate automation solutions. Traditional testing tools focus on protocol-level verification, offering no framework-level support or ability to accurately recognize objects in these new, rich clients, making it virtually impossible to effectively validate the performance of Web 2.0 applications. Script creation, which has always been a lengthy, time-consuming process requiring domain and application expertise, becomes even more complex for Web 2.0 applications.

Impact of new development tools on IT professionals

The IT field has gone through some incredibly fast changes. Advances in technology have come quickly and in great numbers. These advances have had an impact on developers that some in the IT field see as positive and others see as negative. Advances in technology, and in the tools associated with the development process, have caused a shift both in the tools developers use and in the methodologies they use to manage their work.

One area that has seen change is that of Programming Methodologies. Rapid Application Development, a programming methodology of the 1990s, is today also the name of a category of software development tools that make it easier for developers to respond to today's increased demand for software solutions.

In the mid-1980s, structure was widely present in the work done by programmers. Programming languages such as COBOL required programmers to follow a given order when programming: A was followed by B, which preceded C. This logical structure gave programmers a reason to think about their code before they wrote it. Programmers used flow charts to create a visual representation of a program before writing any code.

This paper will cover the changes in Programming Methodologies, Development Concepts, and Software, and the impact experienced by IT professionals.

Programming Methodologies

Technological advances resulted in changes in the way IT professionals addressed the demands of the client. Programming methodologies were the first to show these changes. Traditional methodologies were based on a structured, systematic approach to developing systems. This forced developers to “sign off” on each specification before development could proceed to the next step. The requirements and design were frozen, and the system was coded, tested, and implemented. With such conventional methods, there was a long delay before the customer saw any results, and the development process could take so long that the customer's business could fundamentally change before the system was even ready for use.

This was the case until the early 1990’s when technology guru James Martin introduced a new development methodology, which he called “Rapid Application Development” (Wikipedia contributors, 2010). Rapid Application Development was born out of a need to create applications faster than other methods of the time would allow. Rapid Application Development was a “merger of various structured techniques, especially data-driven Information Engineering, with prototyping techniques to accelerate software systems development” (Wikipedia contributors, 2010).

A reaction against heavyweight methods, characterized as heavily regulated, regimented, micromanaged, waterfall methods of development (Wikipedia contributors, 2010), led to what we know today as Agile, a programming methodology that focuses on making the development process even faster than Rapid Application Development did. Responding to increased demand for programming solutions, IT professionals have looked for ways to cut development time, resulting in a progressive change in programming methodologies.

Development Concepts

Client/server technology was in its infancy in 1992, and already there were signs of the impact advances in this technology would have on networks: “The proliferation of networked applications will come with a large burden for network managers” (Ewald & Roy, 1992). Furthermore, client/server software tools were not available in large numbers, and that made “application development cycles longer than necessary” (Ewald & Roy, 1992).

To make things worse, “software developed with existing tools are not readily reusable” (Ewald & Roy, 1992).

The need for reusability gave birth to Object-Orientation. Object-Orientation (OO) gave developers the ability to improve the deliverables, notations, techniques, and tools used in application development. “Its goal was to encapsulate design decisions with an anthropomorphic design mind-set and objects, classes, inheritance, polymorphism, and dynamic binding as its mechanisms” (Cockburn, 1993). In short, it was meant to provide developers with reusable objects and therefore cut development time.

Cockburn (1993) wrote that “Object-oriented (OO) development is characterized by: the encapsulation of process with data in both the application structure and the development methodology; anthropomorphic design, in which objects in the application are assigned ‘responsibilities’ to carry out; modeling the problem domain throughout development; emphasis on design and code reuse with extensibility; incremental and iterative development.”

Software

Coupe and Onodu (1996) wrote an article in the Journal of Information Technology in which they said, “There is a need to develop new software and upgrade existing systems to meet competitive challenges, to plan effectively, and to manage the day-to-day running of organizations. The pressure is increasing for developers to work more efficiently and produce better quality systems more quickly than before” (Coupe & Onodu, 1996). Using these new technologies to “reduce system development time within IT departments” (Coupe & Onodu, 1996) made it possible for “these systems to be put into operation sooner” (Coupe & Onodu, 1996).

The systems Coupe and Onodu were referring to were computer-aided software engineering (CASE) tools, which at the time were seen as having a positive effect on developer productivity. A survey of developers in UK organizations confirmed the notion, finding that CASE tools “improved the reliability and accuracy of applications software” (Coupe & Onodu, 1996). As with any tool, application development tooling required continuous fine-tuning to keep it operating at peak efficiency.

These advances in technology led to other advances that made it possible for IT professionals to benefit from them. Such is the case with Web services, an advance in software development that prompted the major manufacturers of development software to create “tools that will make it easy for developers” (Dyck, 2001) to do their job.

Application development advances include the creation of tools that expand on the concept of an IDE. Microsoft introduced a research project called Code Canvas, which is likened to a roadmap of code, helping developers understand complexities and changes in code (Krill, 2009). Although still a research project, it shows the moves that technology companies such as Microsoft are making to create an even more visual development environment for developers, in an attempt to bridge the gap between developers and designers.

Conclusion

Advances in technology and the tools associated with the development process have placed increased demands on developers to be more efficient and produce better quality systems more quickly than before. This increased demand has given way to changes in programming methodologies as well as programming tools and has introduced new programming concepts that have given developers a way to create objects that can be re-used. Client/Server technologies, Object-Orientation, and Web Services are just a sample of programming concepts that are the direct result of a need for more productive developers.

The impact these advances have had on IT professionals has always been a hard issue to address. This reflects, in part, the difficulty in defining and measuring software quality (Ewald & Roy, 1992). Training developers in the use of new technologies can affect how researchers and other IT professionals perceive the impact of these changes. The impact can be positive when training and hardware and software upgrades are present. On the other hand, a lack of hardware and software upgrades, as well as training, gives reason to label an advance in technology as negative.

It is not within the scope of this document to define the impact as negative or positive. Rather, it is up to readers to come to their own conclusions based on the information made available to them. One conclusion to note is that organizations, educational institutions, and IT professionals tasked with developing solutions need to be aware of how changes in technology affect them. Advances in technology are oftentimes followed by other advances arising out of a need to improve the tools developers use to do their jobs. It is not enough to train developers on emerging technologies; it is necessary to provide them with the tools they need to use those technologies.

References

Agile software development. (2010, October 20). In Wikipedia, The Free Encyclopedia. Retrieved from http://en.wikipedia.org/w/index.php?title=Agile_software_development&oldid=391820065

Cockburn, A. A. R. (1993). The impact of object-orientation on application development. IBM Systems Journal, 32(3), 420. Retrieved from http://proquest.umi.com.library.capella.edu/pqdweb?did=547801&Fmt=7&clientId=62763&RQT=309&VName=PQD

Coupe, R. T., & Onodu, N. M. (1996). An empirical evaluation of the impact of CASE on developer productivity and software quality. Journal of Information Technology, 11(2), 173. Retrieved from http://proquest.umi.com.library.capella.edu/pqdweb?did=667612851&Fmt=7&clientId=62763&RQT=309&VName=PQD

Ewald, A., & Roy, M. (1992). The evolution of the Client/Server revolution. Network World, 9(46), 75. Retrieved from http://proquest.umi.com.library.capella.edu/pqdweb?did=677568&Fmt=7&clientId=62763&RQT=309&VName=PQD

Krill, P. (2009). Microsoft, IBM highlight software development advances. InfoWorld. Retrieved from http://www.networkworld.com/news/2009/070909-microsoft-ibm-highlight-software-development.html

Rapid application development. (2010, October 21). In Wikipedia, The Free Encyclopedia. Retrieved from http://en.wikipedia.org/w/index.php?title=Rapid_application_development&oldid=392053265

Dyck, T. (2001). Tools Advance Web Services. eWeek.com. Retrieved from http://www.eweek.com/c/a/Application-Development/Tools-Advance-Web-Services/