Top 5 Empowering Consultants of 2023
In today's crowded market, one of the main reasons a customer chooses a particular service over others is a positive product experience that meets their needs. An intuitive experience, rational design, and client-centered transparency all contribute to why a client decides to work with a business.
Shyamala Prayaga, Senior Software Product Manager at NVIDIA and self-driven evangelist for UX and voice technology, provides cutting-edge services that improve usability because she recognizes the societal and cultural benefits of customer satisfaction.
She brings experience, knowledge, and leadership from her roles in technical development and design. Her design and research are presented to general and specialist audiences both nationally and internationally. She has worked on mobile, web, desktop, and smart TV interfaces for 18 years.
Shyamala has also worked on voice interfaces for Connected Home Experiences, Automotive, and Wearables for more than six years.
Shyamala gave Business Leaders Review a wonderful interview in which she discussed her contributions to the user experience and voice technology industries.
The interview’s highlights are as follows:
Shyamala, tell our readers more about your professional history and the innovations you’ve made to improve voice and user experience technologies.
As Senior Software Product Manager, Conversational AI: Deep Learning, I own the roadmap for NVIDIA’s Speech AI GUI product suite and collaborate with cross-functional teams to make it a reality. As Product Owner, Digital Assistant at Ford Motor Company, I led the roadmap for voice and chatbot innovation in both conventional and autonomous vehicles. Ford’s F-150 and Mustang Mach-E models now include my product, the SYNC 4 Digital Assistant, an intelligent, voice-activated in-vehicle assistant.
I led user experience for voice applications at Amazon and Voicebox Technologies prior to joining Ford. I was part of the 2014 launch of Alexa Gen 1, a voice-activated smart speaker that became an instant hit. My career began many years ago as a User Experience (UX) designer building mobile, web, smart TV, and desktop applications. Among my first mobile applications were VidyoMobile, Citibank, and the Toyota Shopping Tool. I’ve created voice interfaces for connected cars, wearables, and homes over the past ten years.
Describe yourself in greater detail, highlighting the exceptional skillset that makes you one of the most impressive tech leaders enabling modern industry advancements.
Even when I was in college, I was fascinated by artificial intelligence, and I have focused on developing user experiences for cutting-edge technologies throughout my career. My journey with Conversational AI began a decade ago when I prototyped the “Junglee Shopping Tool” augmented reality application for an Amazon hackathon.
There are still many people who feel uncomfortable buying things online because they want to try them out first. I imagined enabling e-trials to encourage online shopping by utilizing emerging technologies like voice and augmented reality. The proof of concept won the “people’s choice award,” and the idea was later introduced by numerous major online retailers, including Amazon.
I led a research project called “Omnichannel Digital Assistant,” funded together with the Department of Transportation, to make autonomous vehicles easier for people with disabilities to use. My idea was to combine adjustable touch screens with tactile surfaces and conversational AI technologies, such as Automated Speech Recognition, Text-to-Speech, Natural Language Processing, Automated Sign Language Detection, and Voice Biometrics, to give people with disabilities the greatest possible control and utility in a self-driving car.
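As a simplified illustration of how such a pipeline could be orchestrated, the toy sketch below chains three of the stages named above (speech recognition, language understanding, speech synthesis) with stubbed models. It is a hypothetical example for illustration only, not code from the project:

```python
# Hypothetical sketch of a voice-assistant turn; each stage is a stub
# standing in for a real model (ASR, NLP, TTS).

def automated_speech_recognition(audio: bytes) -> str:
    # A real ASR model would decode audio; here we return a fixed transcript.
    return "take me to the pharmacy"

def natural_language_processing(text: str) -> dict:
    # Extract a destination-setting intent from the transcript.
    if "take me to" in text:
        return {"intent": "SET_DESTINATION",
                "destination": text.split("take me to")[-1].strip()}
    return {"intent": "UNKNOWN"}

def text_to_speech(reply: str) -> bytes:
    # A real TTS model would synthesize audio; we return the text as bytes.
    return reply.encode("utf-8")

def assistant_turn(audio: bytes) -> bytes:
    # One full turn: hear, understand, decide, speak.
    transcript = automated_speech_recognition(audio)
    parsed = natural_language_processing(transcript)
    if parsed["intent"] == "SET_DESTINATION":
        reply = f"Setting destination to {parsed['destination']}."
    else:
        reply = "Sorry, I did not understand."
    return text_to_speech(reply)

print(assistant_turn(b"<microphone audio>").decode("utf-8"))
# -> Setting destination to the pharmacy.
```

In a real in-vehicle assistant, each stub would be replaced by a trained model, and additional channels such as touch, sign-language detection, and voice biometrics would feed the same intent layer.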
I believe that products should be accessible to everyone, regardless of age, abilities, or circumstances. In my opinion, self-driving cars will alter the most fundamental rule of driving: that a qualified, licensed driver is required to operate the vehicle. Vehicles that can operate themselves won’t require a licensed driver. An autonomous vehicle can be used by anyone who can enter it and tell it where to go.
I introduced the five pillars that will make autonomous vehicles more inclusive: recognition, understanding, response, trust, and independence. In my opinion, “when a vehicle becomes the driver, the voice becomes the companion.” Omnichannel digital assistants have the potential to revolutionize kiosks, retail establishments, and the automotive industry. The right orchestration is all that is required.
Although voice interfaces have made a lot of progress and are now the most natural way to communicate, there are still issues with language, understanding, and recognition. As a result, the products lack trustworthiness and utility. My most recent book, “Emotionally Engaged Digital Assistant: Humanizing Technology and Design,” addresses these challenges.
The book provides a number of essential frameworks for putting voice interfaces into action that foster trust. Emotional engagement is based on six guiding principles: ease, compassion, openness, connection, self-assurance, and joy. I explain in the book how the six principles and the framework can be used to humanize technology and design.
Taking into consideration the most recent advancements in conversational AI and deep learning, please provide your valuable perspective on the ways in which these new technologies guarantee reliability in modern operations.
The proliferation of voice-enabled assistants is evidence of our long-held dream of having a voice assistant with which we could communicate. It is now possible to converse with these voice-activated assistants and automate routine tasks thanks to advancements in deep learning and conversational AI. For my book “Emotionally Engaged Digital Assistant – Humanizing Design and Technology,” I conducted a survey of over a hundred people and found that almost everyone owns at least four voice-enabled devices and uses them to set alarms, schedules, and reminders.
Humans communicate most naturally through speech. Unlike most technology, speech has no learning curve. Advances in AI and deep learning are making it easier to communicate with these voice assistants. Instead of relying on rigid, rule-based interfaces, we can now converse with these assistants in a natural, conversational manner as AI advances.
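The contrast between rigid rule-based interfaces and more tolerant language understanding can be shown with a toy sketch (a hypothetical example; the intent names and keywords are invented for illustration):

```python
from typing import Optional

# Toy contrast between a rigid command grammar and simple keyword-based
# intent matching; all intents and keywords are invented for illustration.

RULE_GRAMMAR = {
    "set alarm": "SET_ALARM",
    "play music": "PLAY_MUSIC",
}

INTENT_KEYWORDS = {
    "SET_ALARM": {"alarm", "wake"},
    "PLAY_MUSIC": {"play", "music", "song"},
}

def rule_based(utterance: str) -> Optional[str]:
    # Only exact, pre-registered phrases are understood.
    return RULE_GRAMMAR.get(utterance.lower().strip())

def keyword_intent(utterance: str) -> Optional[str]:
    # Score each intent by keyword overlap with the utterance.
    words = set(utterance.lower().split())
    best, best_score = None, 0
    for intent, keywords in INTENT_KEYWORDS.items():
        score = len(words & keywords)
        if score > best_score:
            best, best_score = intent, score
    return best

print(rule_based("could you wake me at 7"))      # -> None (not in the grammar)
print(keyword_intent("could you wake me at 7"))  # -> SET_ALARM
```

Modern conversational AI replaces the keyword heuristic with learned language models, but the user-facing difference is the same: the system meets the speaker’s phrasing instead of forcing the speaker to memorize commands.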
The use cases for conversational AI technology range from customer service use cases to automotive applications. It reduces the distractions that come with using smartphones while driving by allowing customers in the automotive industry to get directions, play music, and learn about nearby attractions. Voice-activated drive-throughs are currently being tested by retailers in an effort to improve their ordering and fulfillment processes.
With advances in text-to-speech, we can now create high-quality, emotive synthetic voices from tiny amounts of data, compared with the days of data once required. TTS provides voices for people with disabilities in addition to assisting content designers who produce audio content. Thanks to advances in natural language processing, supported by large language models, bots and humans can now converse with one another in a high-quality manner. This kind of content generation can benefit applications like virtual assistants, IVR systems, and others.
Tell us about the social and cultural benefits of voice technology evolution as a professional technology advocate.
During one episode of my podcast, “The Future is Spoken,” I interviewed a person with cerebral palsy who uses a wheelchair to get around. Because he can’t move freely, many everyday tasks are difficult for him. As a first-time father expecting a child, he voice-enabled his entire home so he could enjoy fatherhood as any other person would. He built a voice-activated, height-adjustable cradle to help him hold his baby and put the baby back down. It shows how voice-enabled technology can help everyone and make access possible for everyone.
Voice technology is being investigated in numerous contexts, including urban innovation and women’s safety. Seniors in senior housing are using voice assistants for companionship and emergency assistance. One study aims to use voice to detect COVID-19 and Parkinson’s disease, which can be identified by how certain phonemes are pronounced. In rural India, voice technology is being explored to help digitally illiterate farmers find information about agriculture.
There are numerous advantages to voice technology, and these investigations demonstrate that numerous possibilities remain unexplored.
What is your primary responsibility at NVIDIA, and what motivates you to promote enhancements in the voice and user experience niche?
As Senior Software Product Manager, Conversational AI: Deep Learning, I am in charge of the plan and vision for NVIDIA’s Speech AI GUI product suite, which gives customers the ability to personalize Speech AI with self-service features that require no code at all. In the conversational AI industry, customers are increasingly requesting the ability to create their own synthetic voices.
Among other possibilities, the voice of a cancer patient can be preserved, or the voice of a metahuman can be powered. Traditionally, creating a custom voice of production-grade quality requires hours of data and the technical expertise to train the model. Low-code and no-code platforms are lowering that barrier, allowing more people to personalize their voices with little data and little coding.
Initially, I majored in architecture, but I switched to user experience because I enjoy simplifying intricate user interactions. As a child, watching my parents struggle with technology inspired me to design products that are usable. No matter how many extraordinary features a product has, it isn’t worth the cost if it isn’t usable. My firm belief is that every product’s user experience is its soul. My designs are based on the idea that they should be understandable by fifth-graders. If a fifth grader can’t understand a design easily, the average person won’t either.
NVIDIA has significantly altered the technological landscape thus far. How is your expertise helping the business scale its progress to even greater heights?
I contend that the user experience is the very essence of any product. If the product is simple to use, there will probably be more feature requests and ongoing utility requests. If a product can’t be used, customers will stop buying it.
I design every feature and product from the user’s point of view. Every step of my design cycle includes extensive user research and validation with customers. When a product is designed around the user, it will always be usable. This supports not only utility but also scalability.
What advice would you give to young people who want to work on voice-assistive software and enter the user experience (UX) field soon?
Take part in industry meetups and events, meet new people, and learn about conversational AI and UX by reading books, listening to podcasts, and attending events. Learn the important concepts, tools, and processes. Create sample applications and capstone projects with the knowledge you’ve gained. Put your skills into practice: no matter how much you read and study, confidence can only be gained through application. Learn about experts in the field and what they do every day. Establish connections that matter.
Where do you see yourself in the future, and how are you working toward your future objectives in this field?
These are still early days for the conversational AI industry. Technology enablement and improvement still have a long way to go, as does defining unified standards and adopting them across sectors. I want to bring them all together to create a universal, omnichannel experience.