Algorithms are everywhere. Whether you are writing software, analysing a genome, predicting traffic jams, producing automatic movie recommendations, or just surfing the internet, you are dealing with algorithms. Every branch of computer science uses algorithms, so algorithms and data structures are an essential part of any CS curriculum.
It is important that the algorithms we use are efficient: users want to see search results in the blink of an eye, even when the search covers trillions of web pages. A poorly designed algorithm could take literally centuries to process all the web pages indexed by a search engine, or all the Facebook posts; algorithmic improvements are what make such systems practical.
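The gap between a poor algorithm and a good one can be dramatic even on tiny inputs. As an illustrative sketch (not from the text itself), consider two ways of computing Fibonacci numbers: a naive recursion that takes exponential time, and a simple loop that takes linear time.

```python
def fib_naive(n):
    # Exponential time: recomputes the same subproblems over and over.
    if n <= 1:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

def fib_fast(n):
    # Linear time: each Fibonacci number is computed exactly once.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib_fast(100))  # returns instantly: 354224848179261915075
```

Calling `fib_fast(100)` finishes instantly, while `fib_naive(100)` would need on the order of 2^100 recursive calls, far longer than centuries on any machine: the same "centuries versus a blink of an eye" gap described above, produced purely by the choice of algorithm.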
That is why tech companies ask so many algorithmic questions in their interviews. In data science problems such as ranking internet search results, predicting road accidents, and recommending movies to users, advanced algorithms are used to achieve excellent search quality, high prediction accuracy, and relevant recommendations.
However, getting even a simple machine learning algorithm like linear regression to process big data is usually a challenge. When advanced algorithms such as deep neural networks are applied to huge datasets, they make extremely accurate predictions, recently even beginning to outperform humans in some areas of vision and speech recognition. But making these algorithms run in hours rather than years on a large dataset is hard, and the ability to perform experiments quickly is crucial in data science.