Show simple item record

dc.contributor.author: Zhang, Dell
dc.contributor.author: Lee, Wee Sun
dc.date.accessioned: 2004-12-13T08:16:21Z
dc.date.available: 2004-12-13T08:16:21Z
dc.date.issued: 2005-01
dc.identifier.uri: http://hdl.handle.net/1721.1/7438
dc.description.abstract: Co-training is a semi-supervised learning method designed to take advantage of the redundancy that is present when the object to be identified has multiple descriptions. Co-training is known to work well when the multiple descriptions are conditionally independent given the class of the object. The presence of multiple descriptions of objects in the form of text, images, audio, and video in multimedia applications appears to provide redundancy in a form that may be suitable for co-training. In this paper, we investigate the suitability of text and image data from the Web for co-training. We perform measurements to look for indications of conditional independence in the texts and images obtained from the Web; our measurements suggest that conditional independence is likely to be present in the data. Our experiments within a relevance feedback framework, which test whether a method that exploits the conditional independence outperforms methods that do not, also indicate that better performance can indeed be obtained by designing algorithms that exploit this form of redundancy when it is present.
dc.description.sponsorship: Singapore-MIT Alliance (SMA)
dc.format.extent: 148397 bytes
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.relation.ispartofseries: Computer Science (CS);
dc.subject: Co-Training
dc.subject: Machine Learning
dc.subject: Multimedia Data Mining
dc.subject: Semi-Supervised Learning
dc.title: Validating Co-Training Models for Web Image Classification
dc.type: Article
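The co-training setting described in the abstract can be illustrated with a minimal toy sketch. This is not the paper's algorithm: the two Gaussian "views" (standing in for text and image features of a web page), the nearest-centroid threshold classifiers, and the confidence rule are all illustrative assumptions; the only point shown is that two views, conditionally independent given the class, let each view's classifier pseudo-label examples for the other.

```python
import random

random.seed(0)

def make_example(label):
    # Two "views" of one object (e.g. a web page's text and its image).
    # They are conditionally independent given the class, as co-training
    # assumes: each view = class center plus its own independent noise.
    center = 1.0 if label == 1 else -1.0
    return (center + random.gauss(0, 0.5),
            center + random.gauss(0, 0.5),
            label)

def fit_threshold(pairs):
    # Toy nearest-centroid classifier on one view: the decision
    # threshold is the midpoint between the two class means.
    pos = [f for f, y in pairs if y == 1]
    neg = [f for f, y in pairs if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

labeled = [make_example(y) for y in (1, 1, 1, 0, 0, 0)]  # tiny labeled pool
unlabeled = [make_example(random.choice((0, 1))) for _ in range(200)]

for _ in range(20):  # co-training rounds
    t1 = fit_threshold([(v1, y) for v1, _, y in labeled])
    t2 = fit_threshold([(v2, y) for _, v2, y in labeled])
    # Each view's classifier pseudo-labels its most confident unlabeled
    # example (largest margin from its threshold) and moves it into the
    # shared labeled pool, so the views teach each other.
    for t, view in ((t1, 0), (t2, 1)):
        if not unlabeled:
            break
        best = max(unlabeled, key=lambda ex: abs(ex[view] - t))
        unlabeled.remove(best)
        labeled.append((best[0], best[1], 1 if best[view] > t else 0))

# Evaluate the view-1 classifier trained on the grown labeled pool.
t1 = fit_threshold([(v1, y) for v1, _, y in labeled])
test = [make_example(random.choice((0, 1))) for _ in range(200)]
accuracy = sum((f1 > t1) == (y == 1) for f1, _, y in test) / len(test)
```

Starting from six labeled examples, the pseudo-labeling rounds grow the labeled pool, and the resulting single-view classifier separates the toy classes well because the most confident pseudo-labels are almost always correct under the conditional-independence assumption.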

