Wednesday, September 16, 2009

using AJAX AutoCompleteExtender and WebMethods

A TextBox gets autocomplete suggestions from a web service through the AJAX Control Toolkit's AutoCompleteExtender; its ServicePath and ServiceMethod attributes point at the service shown below. A typical declaration (the control IDs and MinimumPrefixLength here are illustrative):

<asp:ScriptManager ID="uxScriptManager" runat="server" />
<asp:TextBox ID="uxAreaTxt" runat="server" />
<ajaxToolkit:AutoCompleteExtender ID="uxAreaAC" runat="server"
    TargetControlID="uxAreaTxt" MinimumPrefixLength="1"
    ServicePath="~/Service/areasService.asmx" ServiceMethod="FindAreas" />

The web service itself (the ScriptService attribute is required so the extender can call it from script):
[System.Web.Script.Services.ScriptService]
public class areasService : System.Web.Services.WebService
{
    [WebMethod]
    public String[] FindAreas(String prefixText, int count)
    {
        // return all records whose Title starts with the prefix input string
        PhysicianDAO phyDao = new PhysicianDAO();
        List<String> titleArList = new List<String>();
        Area[] areasList = phyDao.GetAreaList(prefixText);
        foreach (Area area in areasList)
        {
            String strTemp = Convert.ToString(area.AreaName) + "," + Convert.ToString(area.State.StateCode);
            titleArList.Add(strTemp);
        }
        return titleArList.ToArray();
    }
}

Friday, July 17, 2009

Access specifiers in .NET


1. PUBLIC: As the name specifies, it can be accessed from anywhere. If a member of a class is defined as public, it can be accessed anywhere in the class as well as outside the class. This means that objects can access and modify public fields, properties and methods.
2. PRIVATE: As the name suggests, it cannot be accessed outside the class. It is the private property of the class and can be accessed only by members of the class.
3. FRIEND/INTERNAL: Friend and Internal mean the same thing; Friend is used in VB.NET, Internal in C#. Friend members can be accessed by all classes within an assembly but not from outside the assembly.
4. PROTECTED: Protected members can be used within the class as well as in the classes that inherit from it.
5. PROTECTED FRIEND/PROTECTED INTERNAL: A Protected Friend member can be accessed by members of the assembly or by an inheriting class and, of course, within the class itself.

6. DEFAULT: A Default property is a single property of a class that can be set as the default. This allows developers who use your class to work more easily with the default property, because they do not need to make a direct reference to it. Default properties cannot be declared Shared/Static or Private, and each must accept at least one argument or parameter. Default properties do not promote good code readability, so use this option sparingly (see the C# sketch below).
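A rough C# sketch of these modifiers (the class and member names are made up for illustration; C#'s counterpart of a VB.NET default property is an indexer):

using System;
using System.Collections.Generic;

public class Account                      // public: accessible from anywhere
{
    private decimal balance;              // private: only inside Account
    internal string branchCode;           // internal: anywhere in the same assembly
    protected DateTime opened;            // protected: Account and classes inheriting it
    protected internal string auditTag;   // protected internal: same assembly OR inheriting classes

    private List<string> notes = new List<string>();

    // C#'s version of a default property: an indexer.
    // It cannot be static and must take at least one parameter.
    public string this[int index]
    {
        get { return notes[index]; }
        set { notes[index] = value; }
    }
}

With the indexer in place, callers can write account[0] directly, which is the same convenience VB.NET's Default keyword provides.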

normal forms and DDL, DCL, DML, TCL commands

DDL : CREATE, ALTER, DROP
DML : INSERT, UPDATE, DELETE
DCL : GRANT, REVOKE
TCL : COMMIT, SAVEPOINT, ROLLBACK
DQL : SELECT
-- Normalization is a process of efficiently organizing data in the database.
-- Goals of normalization: there are two goals of the normalization process: eliminating redundant data (for example, storing the same data in more than one table) and ensuring data dependencies make sense (only storing related data in a table).
-- 1NF: every value in each column of a table must be atomic; a row may not hold more than one value for an attribute, i.e. no sets of values within a column (a Phone column holding "98401, 98402" violates 1NF).
-- 2NF: no partial dependency; no non-key attribute may depend on only part of a composite key. This cannot happen if a single attribute is chosen as the primary key.
-- 3NF: no transitive dependency; no non-key attribute may depend on another non-key attribute (for example, storing both City and the city's PinCode in an Employee table, since PinCode depends on City).
-- BCNF: every determinant must be a candidate key.

Thursday, July 16, 2009

to change aspx extension to .gt

The following web.config changes are enough to change the ".aspx" extension to ".gt" in an application.

Add this entry under buildProviders:

<buildProviders>
    <add extension=".gt" type="System.Web.Compilation.PageBuildProvider" />
</buildProviders>

Then add this entry under httpHandlers:

<httpHandlers>
    <add path="*.gt" verb="*" type="System.Web.UI.PageHandlerFactory" validate="true" />
</httpHandlers>

Then change all .aspx extensions to .gt in the application; .gt now works just like .aspx. (On IIS, the .gt extension may also need to be mapped to the ASP.NET ISAPI handler; the Visual Studio development web server needs no extra mapping.)

Wednesday, July 15, 2009

dynamically adding columns and values for a grid

private void GetSamples()
{
Hashtable hashList = new Hashtable();
Hashtable rptDispHashList = new Hashtable();
IList reportRsltsList = _sampleUI.GetTestReportDetails(_caseId);
string ReferralNumber = "REF0710014";
DataTable dt = BuildDataTableHeader();
int fixedColCount=dt.Columns.Count;
dt = BuildDataTableHeaderForSampleParameter(ReferralNumber, dt); // adds the dynamic columns to the grid's DataTable
dt = BuildDataTableHeaderForResultParameter(ReferralNumber, dt);
if(reportRsltsList.Count != 0)
{
foreach(IList reportList in reportRsltsList)
{
DataRow dr = dt.NewRow();
ReportResult rptRsult = (ReportResult)reportList[0];
Test test = (Test)reportList[1];
Patient patient = (Patient)reportList[2];
string billingType = reportList[3].ToString();
//dt = BuildDataTableHeaderForSampleParameter(rptRsult.AccessionNo);
dr[5] = rptRsult.ReferralNo;
dr[6] = test.TestName;
dr[7] = patient.Name;
dr[2] = rptRsult.ReportResultCodes.ReportResultCode;
dr[1] = rptRsult.SampleStatus;
dr[3] = rptRsult.IsPrimary;
dr[8] = rptRsult.Id;
dr[0] = rptRsult.AccessionNo;
dr[9] = rptRsult.ReportDesc;
dr[10] = rptRsult.ReportResultCodes.ReportResultID;
dr[11] = rptRsult.ReportDispatch.RptDispatchId;
dr[12] = rptRsult.ReportDispatch.RptDispatchCode.RptDispatchId;
dr[13] = rptRsult.SampleTypeCode;
dr[14] = test.TestId;
dr[15] = billingType;
if (rptRsult.ReportDispatch.CourierRefferenceNo != "")
dr[19] = rptRsult.ReportDispatch.CourierRefferenceNo;
dr[20] = rptRsult.ReportDispatch.rptDispatchDate;
_SampleTypesHash = (Hashtable)Session[SessionGlobals.SAMPLE_TYPES_LIST];
if (_SampleTypesHash != null)
{
if (_SampleTypesHash.ContainsKey(rptRsult.SampleTypeCode.ToString()))
dr[16] = _SampleTypesHash[rptRsult.SampleTypeCode.ToString()].ToString();
}
_TestsListHash = (Hashtable)Session[SessionGlobals.CASE_TEST_LIST];
if (_TestsListHash != null)
{
if (_TestsListHash.ContainsKey(test.TestId.ToString()))
dr[17] = _TestsListHash[test.TestId.ToString()].ToString();
}
dr[4] = reportList[4].ToString();
dr[18] = Convert.ToInt32(reportList[5].ToString());
//to get all sample parameter values
int i = fixedColCount;
Parameter[] sampleParValues = _lmsUI.GetSampleParamValues(rptRsult.AccessionNo);
foreach (Parameter parSamValues in sampleParValues) // adding dynamic column values to the grid
{
dr[i] = parSamValues.ParameterValue;
i++;
}
Parameter[] resultParValues = _lmsUI.GetResultParamValues(rptRsult.AccessionNo);
foreach (Parameter parResultValues in resultParValues)
{
dr[i] = parResultValues.ParameterValue;
i++;
}
//end
dt.Rows.Add(dr);
if(!hashList.Contains(rptRsult.ReferralNo))
{
IList ilis = new ArrayList();
ilis.Add(test.TestId);
ilis.Add(rptRsult.SampleStatus);
ilis.Add(rptRsult.ReportResultCodes.ReportResultCode);
IList ilisT = new ArrayList();
ilisT.Add(ilis);
hashList.Add(rptRsult.ReferralNo, ilisT);
}
else
{
IList ilis = new ArrayList();
IList previousList = (IList)hashList[rptRsult.ReferralNo];
ilis.Add(test.TestId);
ilis.Add(rptRsult.SampleStatus);
ilis.Add(rptRsult.ReportResultCodes.ReportResultCode);
previousList.Add(ilis);
}
if(rptRsult.AccessionNo != "")
{
rptDispHashList.Add(rptRsult.AccessionNo, rptRsult.ReportDispatch);
}
}
uxSaveResultBtn.Enabled = true;
uxReportDelBtn.Enabled = false;
}
else
{
DataRow dr = dt.NewRow();
dt.Rows.Add(dr);
uxSaveResultBtn.Enabled = false;
uxReportDelBtn.Enabled = false;
}
uxSampleGrid.DataSource = dt;
uxSampleGrid.DataBind();
ViewState["HashList"] = hashList;
ViewState["ReportDispatchHashList"] = rptDispHashList;
}

method for adding dynamic columns

private DataTable BuildDataTableHeaderForSampleParameter(string RefNo, DataTable dtable)
{
DataTable dt = dtable;
Parameter[] sampleParNames = _lmsUI.GetSampleParamNames(RefNo);
foreach (Parameter p in sampleParNames)
{
string parName=p.ParameterName;
DataColumn datCol = new DataColumn(parName, typeof(System.String));
dt.Columns.Add(datCol);
}
return dt;
}
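The companion method BuildDataTableHeaderForResultParameter is not shown; presumably it mirrors the method above but pulls result-parameter names. A sketch under that assumption (GetResultParamNames is a guessed name, by analogy with GetSampleParamNames and GetResultParamValues):

private DataTable BuildDataTableHeaderForResultParameter(string RefNo, DataTable dtable)
{
    // assumed mirror of BuildDataTableHeaderForSampleParameter:
    // one string column per result-parameter name
    Parameter[] resultParNames = _lmsUI.GetResultParamNames(RefNo); // hypothetical call
    foreach (Parameter p in resultParNames)
    {
        dtable.Columns.Add(new DataColumn(p.ParameterName, typeof(System.String)));
    }
    return dtable;
}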

using "hashtable" and getting values in .cs file

hashtable at "dao"
public Hashtable GetEmpGroups(string emploginId)
{
    Hashtable hashEmpGroups = new Hashtable();
    StringBuilder gname = new StringBuilder();
    StringBuilder id = new StringBuilder();
    // parameterized query instead of string concatenation (avoids SQL injection)
    string sqlQury = "SELECT LG.GROUP_NAME, LEG.GROUP_ID FROM LMS_EMP_GROUP LEG LEFT JOIN LMS_GROUP LG ON LEG.GROUP_ID = LG.GROUP_ID WHERE EMP_LOGIN_ID = @EmpLoginId";
    SqlDataReader dr = null;
    try
    {
        // SqlHelper opens and closes its own connection from the connection string
        dr = SqlHelper.ExecuteReader(_connStr, CommandType.Text, sqlQury,
            new SqlParameter("@EmpLoginId", emploginId));
        while (dr.Read())
        {
            if (gname.Length > 0)
            {
                gname.Append(",");
                id.Append(",");
            }
            gname.Append(dr[0].ToString());
            id.Append(dr[1].ToString());
        }
        hashEmpGroups.Add("Groups", gname);
        hashEmpGroups.Add("GroupIds", id);
        return hashEmpGroups;
    }
    catch (Exception ex)
    {
        throw new ApplicationException(ex.Message, ex);
    }
    finally
    {
        if (dr != null)
        {
            dr.Close();
            dr.Dispose();
        }
    }
}

getting hashtable values at .cs file
Hashtable groupnames=GetEmpGroups(emp.EmpLoginId);
dr[3] = groupnames["Groups"].ToString();

dynamically creating controls based on "db" values

int testId = Convert.ToInt32(uxTestId.Value);
string accessionNo = uxHidAccessNo.Value;
IList ListData = new ArrayList();
IList valueData = new ArrayList();
ListData = _lmsUI.getDataReqAtSamCollectionDetailsListForResultPopup(testId);
valueData = _lmsUI.getDataReqAtSampleCollectionValueListRorResultPopup(accessionNo);
int i = 0;
foreach (DataRequiredAtSampleCollection samData in ListData)
{
string[] sd = null;
Label l1 = new Label();
l1.ID = "uxLbl1" + i;
l1.Text = samData.ParameterName;
Label l2 = new Label();
l2.Text = samData.ParameterId.ToString();
l2.Visible = false;
TextBox t1 = new TextBox();
t1.ID = "uxValueTxt" + i;
if (samData.DataType == "LargeText")
{
t1.TextMode = TextBoxMode.MultiLine;
t1.Width = 200;
t1.Height = 50;
t1.Font.Size = 8;
}
if (samData.DataType == "Text" samData.DataType == "Number" samData.DataType == "LargeText")
{
foreach (DataRequiredAtSampleCollection value in valueData)
{
if (samData.ParameterId == value.ParameterId)
{
t1.Text = value.Value;
}
}
}
if (samData.DataType == "Number")
{
t1.Attributes.Add("OnKeyPress", "showValues();");
}
DropDownList drop = new DropDownList();
drop.Width = 155;
drop.ID = "uxValueDdl" + i;
if (samData.Value != "")
{
sd = samData.Value.Split(',');
int k = 0;
foreach (string s in sd)
{
if (s != "")
{
drop.Items.Insert(k, s);
k++;
}
}
drop.Items.Insert(0, "");
foreach (DataRequiredAtSampleCollection value in valueData)
{
if (samData.ParameterId == value.ParameterId)
{
ListItem match = drop.Items.FindByText(value.Value.Trim());
if (match != null)
    drop.SelectedIndex = drop.Items.IndexOf(match);
}
}
}
TableCell tcell1 = new TableCell();
tcell1.Controls.Add(l1);
TableCell tcell2 = new TableCell();
tcell2.Controls.Add(l2);
TableCell tcell3 = new TableCell();
if (samData.DataType != "List")
{
tcell3.Controls.Add(t1);
}
else
{
tcell3.Controls.Add(drop);
}
TableRow tr1 = new TableRow();
tr1.Cells.Add(tcell1);
tr1.Cells.Add(tcell2);
tr1.Cells.Add(tcell3);
uxDynamicTb.Controls.Add(tr1);
uxSaveBtn.Visible = true;
uxCancelBtn.Visible = true;
i++;
}
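One caveat: dynamically created controls must be re-created on every postback, early in the page life cycle, or their posted values and events are lost. A minimal sketch, assuming the creation code above is wrapped in a method named BuildDynamicControls (an illustrative name):

protected override void OnInit(EventArgs e)
{
    base.OnInit(e);
    // rebuild the dynamic table rows on every request, including postbacks,
    // so that view state and posted values can re-attach to the controls
    BuildDynamicControls();
}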

dynamically created controls finding method

function call
===========
IterateControls(this);

method implementation
==================
private void IterateControls(Control parent)
{
    foreach (Control child in parent.Controls)
    {
        // the null check on ID guards against auto-generated controls that have no explicit ID
        if (child is TextBox && child.ID != null && child.ID.StartsWith("uxValueTxt"))
        {
            TextBox textbox = (TextBox)child;
            _textString += textbox.Text + ",";
        }
        else if (child is DropDownList && child.ID != null && child.ID.StartsWith("uxValueDdl"))
        {
            DropDownList dropdown = (DropDownList)child;
            if (dropdown.Items.Count != 0)
            {
                _textString += dropdown.SelectedItem.Text + ",";
            }
        }
        if (child.Controls.Count > 0)
        {
            IterateControls(child); // recurse into nested containers
        }
    }
    uxHidTextString.Value = _textString;
}

Monday, July 13, 2009

All types of joins

There are six types of join in SQL Server 2000:
1) INNER JOIN
2) OUTER JOIN
3) CROSS JOIN
4) EQUI JOIN
5) NATURAL JOIN
6) SELF JOIN

1) INNER JOIN: produces a result set containing only the matching rows from the specified tables.
Example:
SELECT column_list FROM first_table JOIN second_table
ON first_table.matching_column = second_table.matching_column

2) OUTER JOIN: returns the matching rows plus the non-matching rows from one or both tables; it is always qualified as LEFT, RIGHT or FULL. There are three types of outer join:

A) LEFT OUTER JOIN: displays all the rows from the first table and the matching rows from the second table.
Example:
SELECT column_list FROM first_table LEFT OUTER JOIN second_table
ON first_table.matching_column = second_table.matching_column

B) RIGHT OUTER JOIN: displays all the rows from the second table and the matching rows from the first table.
Example:
SELECT column_list FROM first_table RIGHT OUTER JOIN second_table
ON first_table.matching_column = second_table.matching_column

C) FULL OUTER JOIN: displays all matching and non-matching rows of both tables.
Example:
SELECT column_list FROM first_table FULL OUTER JOIN second_table
ON first_table.matching_column = second_table.matching_column

3) CROSS JOIN: each row of the first table is joined with each row of the second table, without any condition; also called a Cartesian product.
Example:
SELECT column_list FROM first_table CROSS JOIN second_table

4) EQUI JOIN: displays all the matching rows from the joined tables using the equality (=) operator; with SELECT * the matching column appears redundantly from both tables.
Example:
SELECT * FROM first_table JOIN second_table
ON first_table.matching_column = second_table.matching_column

5) NATURAL JOIN: displays all the matching rows from the joined tables but suppresses the redundant matching column.

6) SELF JOIN: a table is joined with itself under different alias names.
Example (assume DEPARTMENT is a table):
SELECT a.DEP_NAME, b.MANAGER_ID FROM DEPARTMENT a JOIN DEPARTMENT b
ON a.MANAGER_ID = b.MANAGER_ID

Monday, June 29, 2009

how to restore your old emails

If a user suddenly loses his/her mails, check whether the user's Z drive is accessible. If the Z drive is not accessible, check the network cable connection and restart the system. If the Z drive is still not available, consult the System Administrator immediately.
If the Z drive is available, do the following to change Outlook Express's store folder location to the Z drive:
1. Start Outlook Express (OE), go to Tools -> Options -> Maintenance -> Store Folder, and write down the path given there.
For Windows NT the path will look like C:\WINNT\Profiles\<user name>\Application Data\Identities\<identity number>\Microsoft\Outlook Express. For Windows XP the path will look like C:\Documents and Settings\<user name>\Local Settings\Application Data\Identities\<identity number>\Microsoft\Outlook Express.
We need to recover messages from the above store folder location if new messages were downloaded there.
2. Close OE.
3. Open the Registry Editor via Start Menu -> Run -> regedit -> OK.
4. In the left panel, go to My Computer -> HKEY_CURRENT_USER -> Identities -> <identity number shown in the above store folder> -> Software -> Microsoft -> Outlook Express -> 5.0.
5. Choose Store Root in the right panel and double-click on it.
6. Delete the value data if it is pointing to the C drive or "%%user name%%...".
7. Enter the new store folder value as "z:\oemails" for Windows XP or "z:\\oemails" for Windows NT.
8. To check the address book location, in the left panel of the Registry Editor go to My Computer -> HKEY_CURRENT_USER -> Software -> Microsoft -> WAB -> WAB4 -> Wab file name. In the right panel choose Default and check the value data; if it is pointing to the C drive or "%%user name%%", delete it and enter the value as "z:\\oeaddressbook\<file name>.wab" for Windows NT or "z:\oeaddressbook\<file name>.wab" for Windows XP.
9. Close the Registry Editor.
If any mails were downloaded into the old store folder, they need to be recovered. Follow these steps:
1. Start OE.
2. Check the store folder via Tools -> Options -> Maintenance -> Store Folder. It should be pointing to your Z drive; if it is not, do the steps above to change the store folder location to the Z drive.
3. Back in the main Outlook Express window, go to File -> Import -> Messages -> Outlook Express 6 -> Import mail from an OE6 store directory -> OK -> Browse -> browse to the previously noted store folder path -> Next -> choose the desired email folders that need to be imported -> Next -> Finish.
Note: if you can't see the required folders in the browse list, go to Windows Explorer -> View -> Options and enable "Show all files".
Note: if you get any error messages while importing mails from the old store folder, please consult the System Administrator.

out and ref

The out and ref parameters are used to return values through the same variables that you pass as arguments to a method. Both are very useful when your method needs to return more than one value.
In this article, I will explain how to use these parameters in your C# applications.
The out parameter
The out parameter can be used to return values through the same variables passed as parameters of the method. Any changes made to the parameter inside the method will be reflected in the variable.
public class mathClass
{
    public static int TestOut(out int iVal1, out int iVal2)
    {
        iVal1 = 10;
        iVal2 = 20;
        return 0;
    }

    public static void Main()
    {
        int i, j; // out variables need not be initialized before the call
        Console.WriteLine(TestOut(out i, out j));
        Console.WriteLine(i);
        Console.WriteLine(j);
    }
}
The ref parameter
The ref keyword on a method parameter causes the method to refer to the same variable that was passed as an input parameter. Any changes made to the parameter inside the method are reflected in the caller's variable.
You can use ref for more than one method parameter.
namespace TestRefP
{
using System;
public class myClass
{

public static void RefTest(ref int iVal1 )
{
iVal1 += 2;

}
public static void Main()
{
int i; // a ref variable must be initialized before the call
i = 3;

RefTest(ref i );
Console.WriteLine(i);

}
}
}

which folders are created when an application is created

1. Which folders are created when an application is created in .NET?
Sol.
For a Windows application:
1. Application name folder
--> application name folder
bin, obj, Properties, Form1.cs, Form1.Designer.cs, Program.cs
bin (Debug (.exe))
obj (Debug (TempPE))
Properties (AssemblyInfo.cs, Resources.Designer.cs, Resources.resx, Settings.Designer.cs)
--> .sln
--> .suo

For WebApplication

-->WebSite1
-->App_Data
-->Default.aspx
-->Default.aspx.cs
-->Web.config

Monday, March 9, 2009

head mounted projector





Today, the pace of surgical innovation has increased dramatically, as have the societal demands for safe and effective practices. The mechanisms for training and retraining suffer from inflexible timing, extended time commitments, and limited content. Video instruction has long been available to help surgeons learn new procedures, but it is generally viewed as marginally effective at best for a number of reasons, such as the fixed point of view that is integral to the narration, the lack of depth perception and interactivity, and missing information [1]. In short, the experience of watching a video is not sufficiently close to being there and seeing the procedure.

A paradigm that uses immersive Virtual Reality could be a more effective approach, allowing surgeons to witness and explore a past surgical procedure as if they were there. We are indeed pursuing such an immersive paradigm together with our medical collaborators at the UNC-Chapel Hill School of Medicine (Dr. Bruce Cairns and Dr. Anthony Meyer) and our computer graphics collaborators at Brown University (Andy van Dam et al.). This paradigm demands methods to record the procedure and to reconstruct the original time-varying events to create an immersive 3D virtual environment of the real scene. A more complete solution should also allow relevant instructions and information, such as vocal narration, 3D annotations and illustrations, to be added by the original surgeon or other instructors.
Besides the recording and the reconstruction, providing an effective way to display a 3D virtual environment to the user is also a major challenge. In this paper, we introduce a hybrid approach to address this challenge. During a typical use of the training system, the trainee would usually stand beside the patient paying close attention to the surgery. She might even stand in the position of a surgeon and observe the procedure from his point of view. At the same time, the trainee is also required to be aware of the surrounding events that could affect the surgeons' actions. Such surrounding events include the actions of other surgeons and technicians, changes in monitoring and life-support devices, and overall awareness of the patient's dynamic condition. Figure 1(a) shows a close-up view of a real surgical operation in progress, and Figure 1(b) shows a snapshot of the many events happening in the operating room.

[Figure 1. Different views of a surgical operation.]
[Figure 2. A user using our prototype system based on our hybrid display approach that combines an HMD and a projector-based display.]
The visual needs of the trainee can be divided into two main parts. The first part requires a high-quality stereo view of the objects and events that the trainee is paying direct attention to, such as the main surgical procedure. High-quality, high-resolution views are needed to discern the great intricacy of the surgery, and stereo vision is needed for better spatial understanding. The second part of a trainee's visual needs is the peripheral view of her surroundings. This is needed by the trainee to maintain visual awareness of the surrounding events. Our medical collaborators, and others in the field, feel that visual awareness of the entire patient and the surroundings is a critical component of surgical training. In particular, with trauma surgery there is typically a lot of relevant activity in the operating room. It has been found that in the human visual system, resolution in the periphery is less dense than in the fovea [2]; therefore the peripheral view need not be high-resolution and high-quality.
Traditionally, head-mounted displays (also called head-worn displays) have been used to provide high-quality stereo visualization of 3D virtual environments. However, most HMDs offer limited fields of view, often only 40° to 60° horizontally and 30° to 45° vertically. Wide-FOV HMDs have been manufactured, but they are rare, expensive and heavy to wear. We are aware of no HMD that can fully cover the human field of view of approximately 200° horizontally and 135° vertically [3]. Although HMDs are good at providing high-quality stereo views, their generally narrow FOV has rendered them less than ideal for providing peripheral views.

The common alternatives to HMDs for immersive visualization of 3D virtual environments are immersive projector-based displays, such as the CAVE™ [4]. Most immersive projector-based displays are capable of providing very wide-field-of-view visualization, and like the CAVE™, some of them are even capable of fully covering the human field of view. Because of the relatively large display surfaces and the fact that the user may move close to them, the image quality and resolution of such projector-based systems may be insufficient for applications that require the display of fine details.

Image recognition



Humans currently have substantial performance advantages over machines in several areas, including object recognition, knowledge representation, reasoning, learning and natural language processing [RN03]. Intriguingly, most of the hard problems arising in these areas can naturally be cast as NP-hard optimization problems, with the majority reducible to pattern matching problems such as maximum common subgraph [Smi99, EV07, Bun00, BDK+08, Sin02]. The formal intractability of most problems associated with human intelligence is at the heart of the continued difficulties AI researchers face in mimicking or surpassing human capabilities in these areas.

It may seem surprising that capabilities that we take for granted and perform quite easily could be computationally intractable. However, it is important to remember that this intractability does not preclude efficient generation of approximate solutions. In practice, exact solutions to optimization problems arising in AI are not required; generally there is a graceful degradation of performance as a solution moves away from global optimality. Because of this behavior, the ideal computational approach is to use specialized heuristic algorithms to attack these problems [Sim95]. It is interesting to note that human brains are thought to contain structures specialized for pattern matching ("wetware heuristics") that are used to support a variety of capabilities for which humans still hold a performance advantage over machines, and that these structures have been used as inspirations for the development of successful heuristic algorithms [Sin02, Mou97, Mac91].

[Figure 1: Object recognition by image matching proceeds by pairing points in two images that correspond to the same structure in the outside world. In the algorithms considered here, both feature similarity and geometric consistency are considered in determining to what extent two images are similar.]
In this article we focus on the quintessential pattern recognition problem of deciding whether two images contain the same object. This is a typical example of a capability in which humans outperform modern computing systems, and it can be thought of as an NP-hard optimization problem. We begin to explore whether quantum adiabatic algorithms [EFS00, CFGG00, BBTA99, SMTC02] can be employed to obtain better solutions to this problem than can be achieved with classical optimization algorithms. The first step in this exploration is to map image recognition into the particular input format required for running quantum adiabatic algorithms on D-Wave superconducting AQC processors.
Image matching

A popular method to determine whether two images contain the same object is image matching. Image matching in its simplest form attempts to find pairs of image features from two images that correspond to the same physical structure. An image feature is a vector that describes the neighborhood of a given image location. In order to find corresponding features, two factors are typically considered: feature similarity, as for instance determined by the scalar product between feature vectors, and geometric consistency. The latter is best defined when looking at rigid objects. In this case the feature displacements are not random but exhibit correlations brought about by a change in viewpoint. For instance, if the camera moves to the left we observe translations of the feature locations in the image to the right. If the object is deformable or articulate, then the feature displacements are no longer solely determined by the camera viewpoint, but one can still expect that neighboring features tend to move in a similar way. Thus image matching can be cast as an optimization problem in which one attempts to minimize an objective function that consists of two terms. The first term penalizes mismatches between features drawn from image one and placed at corresponding locations in image two. The second term enforces spatial consistency between neighboring matches by measuring the divergence between them. It has been shown that this constitutes an NP-hard optimization problem [FH05].

[Figure 2: Representation of images as labeled graphs. Shown are three exemplary interest points for each image. The number of interest points detected is content dependent but is on the order of several hundred for 640x480 resolution images with content as shown. Each interest point is assigned a position, scale, and orientation [Low99]. In the figure the scale is indicated by a circle and the orientation by a pointer. This information can be used to characterize the relative pose and position of two interest points, denoted by the vectors ~g next to the dotted lines.]
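A schematic form of such a two-term objective (the notation here is illustrative, not taken from the paper): let the binary variable $x_{ij} \in \{0,1\}$ indicate that feature $i$ in image one is matched to feature $j$ in image two. One then minimizes

$$E(x) = \sum_{ij} D(f_i, f_j)\, x_{ij} + \lambda \sum_{(ij),(kl)} V(g_{ik}, g_{jl})\, x_{ij} x_{kl}$$

where $D$ measures the dissimilarity of the feature vectors $f_i$ and $f_j$, $V$ penalizes geometric inconsistency between the relative displacements $g$ of two candidate matches, and $\lambda$ weights the two terms against each other. An objective that is quadratic in binary variables is precisely the form (QUBO/Ising) that adiabatic quantum processors such as D-Wave's accept as input.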

Quantum Computers




By combining quantum computation and quantum interrogation, scientists at the University of Illinois at Urbana-Champaign have found an exotic way of determining an answer to an algorithm – without ever running the algorithm.


The world's first commercial quantum computer strutted its stuff in Reno, Nevada at the SC07 supercomputing conference. D-Wave Systems Inc. collaborated with Google to demonstrate how quantum computers can perform image recognition tasks at speeds rivalling human capabilities. The Neven-based image recognition and search-by-image capability was acquired by Google when it bought Neven Vision in 2006.
"Our image-matching demonstration, the core of which is too difficult for traditional computers, can automatically extract information from photos?recognising whether photos contain people, places or things?and then categorise the image elements by visual similarity," said Geordie Rose, D-Wave founder and CEO.
Google acquired Neven Vision for its expertise in recognising similarities among photos. Among the image-recognition tasks, the simplest would include determining whether a photo contains a person; the most complex would be accurate classification of images by person, place and thing. Even after tuning the algorithms so that they sidestepped the most difficult image-recognition problems, however, they remained too slow for practical deployment in the Google application.
"We have been collaborating with Hartmut Neven, founder of Neven Vision, since Google acquired it," said Rose. "Neven's original algorithms had to make many compromises on how they did things, since ordinary computers can't do things the way the brain does. But we believe that our quantum computer algorithms are not all that different from the way the brain solves image-matching problems, so we were able to simplify Neven's algorithms and get superior results."