Archives: Recommendations

Recommendation information and details

16/11/2020 – Today's recommendation

Blockchain Explained

Investopedia. Nathan Reiff. Updated Feb 1, 2020

• What is Blockchain?
• How Blockchain Works
• Is Blockchain Private?
• Is Blockchain Secure?
• Blockchain vs. Bitcoin
• Public and Private Key Basics
• Practical Applications
• Pros and Cons of Blockchain
• Disadvantages of Blockchain
• What’s Next for Blockchain?

12/11/2020 – Today's recommendation

Artificial intelligence (AI): 5 trends, hype-tested. The Enterprisers Project. By Kate Yuan | September 22, 2020

“Are you exploring how to best implement artificial intelligence in your business? Consider these trends, which are key to a future of practical AI business applications.

If you are considering using artificial intelligence (AI) to mature your foundational IT and data capabilities, how do you separate hype from reality?

Whether you are exploring the promises of AI for your business or still wondering when you will see truly transformative results, here are five industry trends that will help realize AI’s untapped potential. Let’s break them down:

  1. Black box vs. explainable AI
  2. Machine learning vs. machine teaching
  3. von Neumann computing vs. neuromorphic computing
  4. Digital vs. quantum computers
  5. Electronic vs. brain-machine interface devices

11/11/2020 – Today's recommendation

Demystifying data science roles. Towards Data Science. Yorgos Askalidis. Aug 6, 2019

“The ‘data scientist’ has cemented its legacy in pop culture as the vague buzzword describing anyone who can access and make sense of the data that power so many of our experiences today. Data scientists of all types are, seemingly, in hot demand. A search on LinkedIn for “Data Scientist” job postings will return 28 thousand postings for the United States alone. But searches for “Data Analyst”, “Machine Learning Engineer”, and even “Data Engineer” and “Data Visualization” will return tens of thousands of postings each.

How can you make sense of the differences between these positions, especially as you’re entering the data science job market for the first time?

In my three years at Spotify I had a change in title, from “Data Analyst” to “Data Scientist”, without my role changing. And from there, a change in my role, from product to finance, without my title changing. New hybrid positions have also been created, spanning data science, data engineering, and data visualization.

Each of these roles has its own set of stakeholders and requirements of subject-matter expertise.

In this post I’m offering my thoughts and experience on some of the main types of data science jobs that I have come across in the market, so you can identify the positions best suited to you and prepare for interviews accordingly”.

10/11/2020 – Today's recommendation

09/11/2020 – Today's recommendation

BE-TERNA DAY 2020

The End of Average

18.11.2020.  LIVE STREAM

About the event

“We live in an age of “breaking news”. Every day we encounter the terms new reality, uncertain times, and new ways of communicating and doing business. And so we often ask ourselves: is this our new normal?

Changes in the market have never been faster, and they will never again be slower. If we look back 30 years, we will see that change has been happening constantly: the first computer was built, the internet was created, mobile phones appeared, and new technologies were developed. Yet we often did not realize that these were changes that would transform business and the way we live. The fact is that change is normal, not the new normal. It is up to us to embrace it and master it in the best possible way.

Join us at the online event “The End of Average” and learn from eminent speakers from BE-terna, Mainstream, Avalarc, Coca-Cola HBC, Atlantic Group, Symphony, and the law firm Naumović & Partneri how to be above average in these not-so-average business times.”

Register – Free

07/11/2020 – Today's recommendation

The 17 Best Free Tools for Data Science. Dataquest, November 8, 2019

“If you’re just getting started, we’ve picked out some of our absolute favorites: the best free tools for data science using Python, R, and SQL.

  1. R

A key benefit of the R language is that it was designed primarily for statistical computing, so many of the key features that data scientists need are built-in.

  2. Python

Like R, Python was also created in the 90s. But unlike R, Python is a general-purpose programming language. It’s often used for web development, and it is one of the most popular overall programming languages. If you learn Python and later decide that software development is a better fit for you than data science, a lot of what you’ve learned is transferable.

  3. SQL

SQL is a complementary language to Python and R; it is often the second language someone learns when getting into data science. Because most of the world’s data is stored in databases, SQL is an incredibly valuable language to learn. It’s common for data scientists to use SQL to retrieve data that they will then clean and analyze using Python or R.
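As a sketch of that workflow, the snippet below uses Python's built-in sqlite3 module and a hypothetical in-memory "sales" table: SQL handles retrieval and aggregation, then Python takes over for analysis.

```python
import sqlite3

# Hypothetical example: a small in-memory database with a "sales" table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("EU", 120.0), ("EU", 80.0), ("US", 200.0)])

# SQL does the retrieval and aggregation; the resulting rows would then
# be cleaned and analyzed in Python (e.g. with pandas).
rows = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('EU', 200.0), ('US', 200.0)]
```

The table name and values here are illustrative only; in practice the connection would point at a real database.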

R Packages

R has a thriving ecosystem of packages that add functionality to the core R language. These packages are distributed by CRAN (the Comprehensive R Archive Network) and can be installed using R syntax (as opposed to Python, which uses a separate package manager). The packages we list below are some of the most commonly used and popular packages for data science in R.

  4. Tidyverse

Technically, tidyverse is a collection of R packages, but we include it here as a single entry because it is the most commonly used set of packages for data science in R. Key packages in the collection include dplyr for data manipulation, readr for importing data, ggplot2 for data visualization, and many more.

  5. ggplot2

The ggplot2 package allows you to create data visualizations in R. Even though ggplot2 is part of the tidyverse collection, it predates the collection and is important enough to mention on its own.

ggplot2 is popular because it allows you to create professional-looking visualizations fast using easy-to-understand syntax.

R includes built-in plotting functionality, but ggplot2 is generally considered superior and easier to use, and is the number one R package for data visualization.

  6. R Markdown

The R Markdown package facilitates the creation of reports using R. R Markdown documents are text files that contain code snippets interleaved with markdown text.

R Markdown documents are often edited in a notebook interface that allows the creation of code and text side by side. The notebook interface allows the code to be executed and the output of the code to be seen inline with the text.

R Markdown documents can be rendered into many versatile formats including HTML, PDF, Microsoft Word, books, and more!

  7. Shiny

The Shiny package allows you to build interactive web apps using R. You can build functionality that allows people to interact with your data, analysis, and visualizations as a web page.

Shiny is particularly powerful because it removes the need for web development skills and knowledge when creating apps and allows you to focus on your data.

  8. mlr

The mlr package provides a standard set of syntax and features that allow you to work with machine learning algorithms in R. While R has built-in machine learning capabilities, they are cumbersome to work with. mlr provides an easier interface so you can focus on training your models.

mlr contains classification, regression, and clustering analysis methods as well as countless other related capabilities.

Python Libraries

Like R, Python also has a thriving package ecosystem, although Python packages are often called libraries.

Unlike R, Python was not designed primarily as a data science language, so data-focused libraries like pandas are more or less mandatory for working with data in Python.

Python packages can be downloaded from PyPI (the Python Package Index) using pip, a tool that comes with Python but is external to the Python coding environment.

(A complementary alternative to pip is the conda package manager, which we’ll talk about later on.)

  9. pandas

The pandas library is built for cleaning, manipulating, transforming and visualizing data in Python. Although it’s a single package, its closest analog in R is the tidyverse collection.

In addition to offering a lot of convenience, pandas is also often faster than pure Python for working with data. Like R, pandas takes advantage of vectorization, which speeds up code execution.
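A minimal sketch of that kind of cleaning and transformation, using a tiny hypothetical table (the column names and values are invented for illustration):

```python
import pandas as pd

# Hypothetical messy data: inconsistent name casing and a missing value.
df = pd.DataFrame({"name": ["alice", "Bob", None],
                   "score": [90, 85, 70]})

# Clean and transform in a few vectorized, column-wise operations:
# drop rows with missing names, normalize case, derive a new column.
clean = df.dropna(subset=["name"]).copy()
clean["name"] = clean["name"].str.title()
clean["passed"] = clean["score"] >= 85
print(clean)
```

Note that each step operates on a whole column at once rather than looping over rows, which is where the vectorization speed-up comes from.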

  10. NumPy

NumPy is a fundamental Python library that provides functionality for scientific computing. NumPy provides some of the core logic that pandas is built upon. Usually, most data scientists will work with pandas, but knowing NumPy is important as it allows you to access some of the core functionality when you need to.
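For illustration, a small sketch of the array operations NumPy provides (and that pandas builds on): a single expression standardizes a whole array with no explicit Python loop.

```python
import numpy as np

# Vectorized computation: subtract the mean and divide by the standard
# deviation across the entire array in one expression.
a = np.array([1.0, 2.0, 3.0, 4.0])
standardized = (a - a.mean()) / a.std()

print(standardized.mean())  # effectively 0 after standardization
```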

  11. Matplotlib

The Matplotlib library is a powerful plotting library for Python. Data scientists often use the Pyplot module from the library, which provides a standard interface for plotting data.

The plotting functionality included in pandas calls Matplotlib under the hood, so understanding Matplotlib helps when customizing plots you make in pandas.
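A minimal pyplot sketch (the data and file name here are just illustrations); the non-interactive Agg backend lets it run without a display:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, so no display is needed
import matplotlib.pyplot as plt

# A basic line plot via the pyplot interface, saved to a file.
fig, ax = plt.subplots()
ax.plot([1, 2, 3, 4], [1, 4, 9, 16], marker="o")
ax.set_xlabel("x")
ax.set_ylabel("x squared")
ax.set_title("A minimal Matplotlib plot")
fig.savefig("example_plot.png")
```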

  12. Scikit-Learn

Scikit-learn is the most popular machine learning library for Python. The library provides a set of tools, built on NumPy and Matplotlib, that allow for the preparation and training of machine learning models.

Available model types include classification, regression, clustering, and dimensionality reduction.
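As a sketch of the typical scikit-learn workflow, the snippet below trains a classifier on the library's bundled iris dataset; logistic regression is chosen here only as a simple example model.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Split the bundled iris data, train a classifier, and evaluate it
# on the held-out test samples.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
print(f"test accuracy: {accuracy:.2f}")
```

The same fit/predict/score pattern carries over to the regression, clustering, and dimensionality-reduction estimators mentioned above.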

  13. Tensorflow

Tensorflow is a Python library originally developed by Google that provides an interface and framework for working with neural networks and deep learning.

Tensorflow is ideal for tasks where deep learning excels, such as computer vision, natural language processing, audio/video recognition, and more.

Software

So far, we’ve looked at the best languages for data science and the best packages for two of those languages. (As a query language, SQL is a bit different and doesn’t use “packages” in the same sense).

Next, we’ll look at some software tools that are useful for data science work. These aren’t all open-source, but they’re free for anyone to use, and if you work with data on a regular basis they can be big time-savers.

  14. Google Sheets

If this were not a list of free tools, then undoubtedly Microsoft Excel would be at the top of this list. The ubiquitous spreadsheet software makes it quick and easy to work with data in a visual way, and is used by millions of people around the world.

Google’s Excel clone has most of the core functionality of Excel, and is available free to anyone with a Google account.

  15. RStudio Desktop

RStudio Desktop is the most popular environment for working with R. It includes a code editor, an R console, notebooks, tools for plotting, debugging, and more.

Additionally, RStudio (the company that makes RStudio Desktop) is at the core of modern R development, employing the developers of the tidyverse, Shiny, and other important R packages.

  16. Jupyter Notebook

Jupyter Notebook is the most popular environment for working with Python for data science. Similar to R Markdown, Jupyter notebooks allow you to combine code, text, and plots in a single document which makes data work easy.

Like R Markdown documents, Jupyter notebooks can be exported in a number of formats, including HTML, PDF, and more.

Dataquest’s guided Python data science projects almost all task students with building projects in Jupyter Notebooks, since that’s what working data analysts and scientists generally do in real-world work.

  17. Anaconda

Anaconda is a distribution of Python designed specifically to help you get the scientific Python tools installed. Before Anaconda, the only option was to install Python by itself and then install packages like NumPy, pandas, and Matplotlib one by one, which wasn’t always a straightforward process and was often difficult for new learners.

Anaconda includes all of the main packages needed for data science in one easy install, which saves time and allows you to get started quickly. It also has Jupyter Notebooks built-in, and makes starting a new data science project easily accessible from a launcher window. It is the recommended way to get started using Python for data science.

Anaconda also includes the conda package manager, which can be used as an alternative to pip to install Python packages (although you can also use pip if you prefer).

06/11/2020 – Today's recommendation

Artificial Intelligence in Agriculture: Using Modern Day AI to Solve Traditional Farming Problems. Analytics Vidhya, November 4, 2020.

This article was published as a part of the Data Science Blogathon.

You can write on a wide range of topics including, but not limited to:

  • Machine Learning and Data Science
  • Data Engineering
  • Business Intelligence
  • Deep Learning
  • Data Science in the industry
  • Careers in ML and DS
  • Statistics
  • Python/R/SAS programming
  • And many more!

If you have a prepared article, you have 2 more days to submit it (the deadline is November 8).

05/11/2020 – Today's recommendation

Data Science Minimum: 10 Essential Skills You Need to Know to Start Doing Data Science.  Benjamin Obi Tayo, Nov 4, 2019

“This article will discuss 10 essential skills that are necessary for practicing data scientists. These skills can be grouped into 2 categories: technological skills (Math & Statistics, Coding Skills, Data Wrangling & Preprocessing Skills, Data Visualization Skills, Machine Learning Skills, and Real World Project Skills) and soft skills (Communication Skills, Lifelong Learning Skills, Team Player Skills, and Ethical Skills).”

04/11/2020 – Today's recommendation

Top 5 must-have Data Science skills for 2020. Joos Korstanje. Dec 22, 2019

“R, Python, SQL and Machine Learning” has for a long time been the standard job description of a Data Scientist.

To stay competitive, make sure to prepare yourself for new ways of working that come with new tools:

  • Agile way of working based on the Scrum method. It defines several roles for different people, and this role definition makes sure that continuous improvement can be implemented smoothly.
  • Git and GitHub are developer tools that are a great help when managing different versions of software. They track all changes made to a code base and, in addition, make collaboration much easier when multiple developers change the same project at the same time.
  • Industrialization. What is also changing in Data Science is the way we think about our projects: do not only think about the accuracy of your model, but also take into account execution time and other industrialization aspects of your project.
  • Cloud. Moving compute resources to external vendors like AWS, Microsoft Azure or Google Cloud makes it very easy to set up a very fast Machine Learning environment that can be accessed remotely. This requires Data Scientists to have a basic understanding of how the cloud works, for example working with remote servers instead of your own computer, or working on Linux rather than on Windows / Mac.
  • Big Data. The second aspect of faster IT is using Hadoop and Spark, tools that allow tasks to be parallelized across many computers at the same time (worker nodes). This requires a different approach to implementing models as a Data Scientist, because your code must allow for parallel execution.
  • NLP, Neural Networks and Deep Learning. Use cases for image classification and NLP are becoming more and more frequent, even in ‘regular’ business. These days, at least a basic knowledge of such models has become essential.

Even if you do not have direct applications of such models in your job, a hands-on project is easy to find and will allow you to understand the steps needed in image and text projects”.
03/11/2020 – Today's recommendation

5 Key Technologies for Accelerating Digital Transformation in Business.  Powell Software. September 17th, 2020.

Today’s customer demands are impossible to meet without technologies that enable accessibility and convenience. Any organization with remote employees – as many now have due to the COVID-19 pandemic, and are expected to keep long-term – cannot operate on a daily basis without transitioning to digital platforms.

Digital transformation is not a linear process; by its very nature, it involves constantly adapting to new practices, needs, and environments. If your company doesn’t have a strategy yet, it isn’t too late to start. But before you begin, you should first understand the role of digital transformation in business.

Table of Contents

  1. What is digital transformation?
  2. Why digital transformation strategies fail
  3. 5 accelerators for digital transformation in business
    1. The digital workplace
    2. Intranets
    3. Collaborative spaces
    4. Next-generation training
    5. IoT
  4. Digital transformation technologies are only the first step
    1. Implementing a digital transformation strategy with Powell Software
