The researchers noted that running a computer program that simultaneously tracked all the owner's objects in real-time would be too processor-intensive.
So they based their design around the principle that objects only change locations when humans move them.
As a result, the system focuses on tracking human figures and then looking for objects that have changed position in their vicinity.
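That tracking principle can be sketched in a few lines. The following is an illustrative example, not the researchers' code: it compares object positions before and after a person passes through a room and attributes a change only to objects within the person's reach. The function name, data layout and 1.0m "reach" radius are all assumptions made for this sketch.

```python
import math

REACH_M = 1.0  # assumed arm's-reach radius in metres (illustrative, not from the paper)

def moved_objects(before, after, human_path, reach=REACH_M):
    """Return names of objects that changed position near the human's path.

    before/after: dict of object name -> (x, y) floor position in metres.
    human_path:   list of (x, y) positions the tracked person passed through.
    """
    def near_path(pos):
        return any(math.dist(pos, p) <= reach for p in human_path)

    moved = []
    for name, old_pos in before.items():
        new_pos = after.get(name)
        if new_pos is not None and new_pos != old_pos:
            # Only attribute the move if it happened within reach of the person
            if near_path(old_pos) or near_path(new_pos):
                moved.append(name)
    return moved

before = {"cup": (2.0, 1.0), "keys": (5.0, 5.0)}
after = {"cup": (2.5, 1.2), "keys": (5.0, 5.0)}
path = [(1.8, 0.9), (2.4, 1.1), (3.0, 1.5)]
print(moved_objects(before, after, path))  # only the cup moved, near the path
```

Because only regions a person has visited are checked, the expensive work of scanning the whole scene for changes every frame is avoided.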
Although the Kinect sensor's capabilities are limited - it only sees objects up to 11 feet (3.4m) away and provides "skeleton data" at just 15 frames per second - the Kinsight program has commonsense notions built in to improve accuracy: it knows, for example, that a coffee cup is most likely to be found at a study desk or a kitchen sink, but not inside a bath.
The researchers say accuracy could be improved if Microsoft releases a more powerful Kinect.
"This means that, when in doubt, an object recognition algorithm can use this knowledge to identify an object by analysing the likelihood of it being at some location, or looking for the candidate objects in their other locations," the researchers said.
On the move
Algorithms were also created to help the computer learn the appearance of objects and the context they were likely to be used in by analysing the data gathered.
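One plausible reading of "learning the context" is counting where each object has been sighted and turning those counts into location probabilities. This is a hedged sketch under that assumption; the log format and function name are illustrative, not the paper's implementation.

```python
from collections import Counter, defaultdict

def learn_location_priors(sightings):
    """sightings: list of (object, location) observations.

    Returns object -> {location: probability}, estimated from how often
    each object was seen at each location.
    """
    counts = defaultdict(Counter)
    for obj, loc in sightings:
        counts[obj][loc] += 1
    priors = {}
    for obj, locs in counts.items():
        total = sum(locs.values())
        priors[obj] = {loc: n / total for loc, n in locs.items()}
    return priors

log = [("keys", "hallway table"), ("keys", "hallway table"),
       ("keys", "kitchen counter"), ("cup", "study desk")]
priors = learn_location_priors(log)
print(priors["keys"])  # hallway table twice as likely as kitchen counter
```

Priors learned this way improve as the system runs, since every detected move adds another observation.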
To prove the system worked the two scientists labelled 48 objects - including knives, forks, keys and a Rubik's cube - and identified 80 possible locations around a house.
They then asked volunteers to move the items around according to randomly generated patterns.
The results suggested room for improvement - errors were more likely if the objects were very small, far away, transparent or placed too closely together - but the team said these problems should be addressed by using more sensors per room and adopting more sensitive depth-cameras.
In the meantime, they say that even when the program does lose track of possessions, it can still say where they were last seen, which may still prove helpful.