Volume 2, Issue 6, 2014
Reshma Sheik, Dr. A. Chitra
Most of the vast information available on the web is inaccessible to blind and visually impaired users, since most information retrieval systems do not cater to their needs. Retrieving relevant information from multiple web documents in a synoptic form can be highly beneficial for them. A system with a speech user interface for retrieving web data in summarized form was developed. The system is speaker-independent and uses a domain-specific language model. A query is formulated from the user's input, which can be in either text or voice form. The query is processed by a search engine, such as Google; a fixed number of relevant web documents are parsed and their textual content is extracted. This content is summarized, and the summary is presented to the user as both speech and text. Through this method, an enhanced solution offering a natural and user-friendly means of communication for visually impaired people is intended. The system is easy to implement in real time and simpler in operation than other existing systems.
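The summarization step at the heart of the pipeline described above can be sketched with a simple frequency-based extractive summarizer. This is an illustrative sketch only, not the authors' implementation: the function name, the sentence-splitting heuristic, and the scoring scheme are all assumptions, and the surrounding stages (speech recognition, web search, boilerplate removal, text-to-speech) are omitted.

```python
import re
from collections import Counter

def summarize(text, max_sentences=3):
    """Frequency-based extractive summarizer (illustrative sketch):
    score each sentence by the average corpus frequency of its words
    and keep the top-scoring sentences in their original order."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    if len(sentences) <= max_sentences:
        return text.strip()
    # Word frequencies over the whole document.
    freq = Counter(re.findall(r'[a-z]+', text.lower()))

    def score(sentence):
        tokens = re.findall(r'[a-z]+', sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    # Rank sentences by score, then restore document order for readability.
    ranked = sorted(range(len(sentences)),
                    key=lambda i: score(sentences[i]), reverse=True)
    keep = sorted(ranked[:max_sentences])
    return ' '.join(sentences[i] for i in keep)
```

In a full system of the kind the abstract describes, this function would be applied to the text extracted from each retrieved document, and the resulting summary passed to a text-to-speech engine.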
[1] Fu Bai-wen, Junying, Y. Akita, "Research on accessible design and development of information system for visually impaired people", In: Proc. IEEE International Conference on Computer Application and System Modeling (ICCASM), 2010, Volume 14, pp. 299-303.
[2] A. V. Joshi Kumar, A. Visu, S. Mohan Raj, T. Madhan Prabhu, Kalaiselvi, "A pragmatic approach to aid visually impaired people in reading, visualizing and understanding textual contents with an automatic electronic pen", In: Proc. IEEE Conference on Computer Science and Automation Engineering (CSAE), 2011, pp. 623-626.
[3] B. Z. Halimah, A. Azlina, P. Behrang, Choo, "Voice recognition system for the visually impaired: Virtual cognitive approach", In: Proc. IEEE Conference on Information Technology, 2008, Volume 2, pp. 1-6.
[4] Simon Dobrisek, Jerneja Gros, Bostjan Vesnicer, "Evolution of the Information-Retrieval System for Blind and Visually-Impaired People", International Journal of Speech Technology, 2003, Volume 6, pp. 301-309.
[5] S. Asha, C. Chellappan, "Automatic processing of audio lectures for information retrieval: Vocabulary selection and language modeling", International Journal of Computer Applications, 2011, Volume 14.
[6] Kirsty, Steve Wright, "Internet for Blind and Visually Impaired", Enterprise Information Research Group, Monash University, JCMC, 2007.
[7] Pisit Prougestaporn, "Development of a Web Accessibility Model for Visually-Impaired Students on e-Learning Websites", In: International Conference on Educational and Network Technology (ICENT), 2010.
[8] M. Zajicek, C. Powell, "Web Search and Orientation with BrookesTalk", Royal Institute for Blind, 2007.
[9] Veera Raghavendra, Anand Arokia, "RAVI: Reading Aid for Visually Impaired", International Institute of Information Technology, Hyderabad, January 2008.
[10] T. V. Raman, Charles L. Chen, "Eyes-Free User Interaction", Google, May 21, 2009. Available: http://emacspeak.sf.net
[11] Willie Walker, Paul Lamere, Philip Kwok, Bhiksha Raj, Rita Singh, Evandro Gouvea, Peter Wolf, Joe Woelfel, "Sphinx-4: A Flexible Open Source Framework for Speech Recognition", 2004. Available: http://cmusphinx.sourceforge.net/sphinx4, http://cmusphinx.sourceforge.net/sphinx4/javadoc/edu/cmu/sphinx/jsgf/JSGFGrammar.html
[12] Christian Kohlschütter, Peter Fankhauser, Wolfgang Nejdl, "Boilerplate Detection using Shallow Text Features", L3S Research Center, Leibniz University Hannover, Germany, ACM, February 2010.
Former Student, Department of Computer Science, PSG College of Technology, Coimbatore, India. +91 9495970661.