Categories
AI

New McKinsey study: the economic potential of generative AI

McKinsey has published yet another study, and as always it has to be read in the context of a consulting firm that is constantly looking for new fields of business.

But then again, McKinsey is not exactly a small shop, so they can afford elaborate studies that often contain a kernel of truth.

Now they have published their study on the economic potential of generative AI.

The key takeaways are:

  1. Across the 63 use cases McKinsey analyzed, a global productivity gain of roughly $2.6–4.4 trillion is expected. For comparison, the UK's gross domestic product in 2021 was $3.1 trillion. And this is said to be only the beginning. The time frame being discussed remains unclear. The 63 use cases can be found via the link above.
  2. 75% of this value is generated in the areas of customer operations, marketing & sales, software engineering, and research & development.
  3. All industries are affected, but especially banking, high tech (whatever that is supposed to mean), and life sciences. Retail, too, is expected to benefit by roughly $400–660 billion.
  4. One of the biggest levers, however, lies in how jobs change: 60–70% of the working time that employees spend on repetitive tasks can be saved through the ability to interact with – and generate – natural language! That is a lot. This particularly affects higher-paid jobs, the so-called knowledge workers.
  5. By 2045, half of today's work activities could be automated – about 10 years earlier than previous estimates.
  6. We are only at the beginning of this transformation – companies and policymakers still have time to react (sure, after all they are supposed to hire McKinsey 😉)

At the same time, the study of course also names the challenges that any generative AI brings with it – although these are only mentioned in bullet-point form:

  1. Fairness and equity
  2. Intellectual property questions
  3. Privacy challenges
  4. Security, especially against manipulation
  5. Explainability of answers
  6. Reliability of answers
  7. Social and environmental impact
Categories
AI

Quality of data is significant for AI results

A new study was conducted on the results of different language models. The main outcome: size doesn't always matter.

Large language models (LLMs) are trained with up to 530 billion parameters, which results in significant cost effects. The study shows that models with far fewer parameters, like Chinchilla (70 billion parameters), outperform their larger colleagues, especially when the number of training tokens is raised.

This is the 5-shot performance of different models:

The conclusions we can draw from this are:

  1. It is indeed possible to train a perfectly working language model using only publicly available data. AI is here to stay, regardless of the licensing wars we will see with OpenAI etc.
  2. It is possible for companies to add their own "language" to existing models at a doable price tag.
  3. You should not stick to one model, but stay flexible and keep models interchangeable by testing, testing, testing.
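The study described here matches DeepMind's Chinchilla work (Hoffmann et al., 2022), whose rule of thumb is that, for a fixed compute budget, the number of training tokens should scale with model size at roughly 20 tokens per parameter. A minimal sketch, assuming that approximate 20:1 ratio from the paper:

```python
# Rough sketch of the "compute-optimal" rule of thumb from the Chinchilla
# paper: parameters and training tokens should scale roughly 1:20.
TOKENS_PER_PARAMETER = 20  # approximate ratio reported by Hoffmann et al.

def compute_optimal_tokens(n_parameters: float) -> float:
    """Return the approximate number of training tokens for a model size."""
    return n_parameters * TOKENS_PER_PARAMETER

# Chinchilla: 70 billion parameters -> about 1.4 trillion training tokens
chinchilla_tokens = compute_optimal_tokens(70e9)
print(f"{chinchilla_tokens:.1e}")  # 1.4e+12
```

This is why a 70B-parameter model trained on far more tokens can beat a 530B-parameter model trained on fewer: the larger model is undertrained for its size.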

(Dall-E prompt for header picture: "Create a picture where the language model "Goliath" is being beaten by the language model "Chinchilla", make that a fantasy picture and Goliath being a big, fat bear, as where Chinchilla is a very strong mouse.")

Categories
AI

AI as an infrastructure question

A few days ago over on LinkedIn, Nico put forward the bold thesis that AI should not stand for Artificial Intelligence but for Augmented Intelligence. And in my view he is completely right and makes an important point: artificial intelligence should "not be seen in a threatening context" but rather as an "extension of our mental capabilities". This goes in the same direction as 30 years ago: if Wikipedia exists, why should you memorize anything at all – you only need to know where to look it up.

Victor then brought the topic of infrastructure into the discussion, which I find very exciting. At the moment it is models like OpenAI's GPT and Bard that attract attention with their capabilities – but I don't yet see the implementation in a business context (if that is what he meant). Yes, Microsoft offers an O365-adjacent developer platform with Azure, and Copilot integrates OpenAI into various Office tools. But is that already the promised land? Many companies will ask themselves:
1. What happens to all the siloed knowledge in databases and fat-client applications? How does it get out of there and become usable for my employees and customers?

2. What does the interface look like? Is it really the chat that stands "next to" me as a virtual assistant?

3. How do I measure the AI's performance within the company?

4. How do I determine the truthfulness of its answers and make it transparent?

5. Is there exactly ONE LLM that fits my needs?

6. And how do I integrate business processes with the AI?

I think there is still room for a lot of infrastructure to make Augmented Intelligence truly useful for companies.

Categories
AI

Getting better at SQL with ChatGPT or: the lazy mode for complex queries.

As I mentioned previously, ChatGPT is quite good at SQL (but not only at SQL). I think it's a great opportunity to really learn to code (if you want to call SQL querying "coding", but that's another discussion). I mean, this information alone is SO valuable – I would have searched Stack Overflow for hours to find the reason for my SQL bug:

I can now either change the query or update MySQL.

So here is my lazy setup to create complex SQL queries:

  1. I use the graphical query builder of Metabase for all the complex joining of data.
  2. I then review the result and convert it to SQL.
  3. For lazy mode, I paste the SQL into ChatGPT and ask it to add modifications and adjust it – all in natural language.
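A sketch of what step 3 looks like in practice, using Python's built-in sqlite3 – both the "orders" table and the queries are made up for illustration, and the "adjusted" query stands in for what ChatGPT might return when asked to filter and sort:

```python
import sqlite3

# Illustrative only: a tiny in-memory database with a hypothetical
# "orders" table standing in for data joined in Metabase.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT, total REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "acme", 120.0), (2, "acme", 80.0), (3, "globex", 50.0)],
)

# The query exported from the graphical builder might look like this...
base_query = "SELECT customer, SUM(total) AS revenue FROM orders GROUP BY customer"

# ...and after asking ChatGPT in plain English to "only show customers with
# revenue over 100, sorted by revenue", the adjusted query could come back as:
adjusted_query = (
    "SELECT customer, SUM(total) AS revenue FROM orders "
    "GROUP BY customer HAVING SUM(total) > 100 ORDER BY revenue DESC"
)

print(conn.execute(adjusted_query).fetchall())  # [('acme', 200.0)]
```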

Categories
AI IT

ChatGPT and SQL

ChatGPT and SQL seem to be natural allies: you can write queries in natural language using ChatGPT and then get the SQL code to edit in your favorite SQL editor. In my case, I am a keen user of Metabase for doing any kind of BI work directly in the database, and from there I modify the queries with ChatGPT. What I find _very_ special is that ChatGPT makes mistakes – and bluntly apologizes for them and corrects itself. That's amazing.

ChatGPT apologizing and correcting itself
Categories
AI

Should we use generative AI like ChatGPT in journalism, schools, or communication?

As it turns out, CNET has been using generative or assistive AI for months to create articles – or, better said, to "assist their authors" in writing articles.

Together with the recent announcement of Microsoft integrating ChatGPT into their suites (namely Outlook etc.), we are stepping into an age in which text is no longer created exclusively by humans.

In my opinion, we will still find differences and nuances between purely AI-generated texts and human-written texts – but the assistive function of AI will have an impact on the style and especially the length of texts, as long formats will come back into vogue thanks to the higher writing efficiency of AI assistants.

When using Microsoft's GitHub Copilot, my coding output also increased, and I can imagine a similar effect when writing ChatGPT-powered texts in e.g. Word. We have had this for years in the Google Search suggestion box, and we all love it – only now this will expand to whole blocks of text.

Generative AI as an assistant to enhance writing productivity – to me it compares to the calculator in school: I can calculate in my head, but the calculator does it better and faster. Nonetheless, I still need to figure out what should be calculated.

Categories
AI IT

How does an AI Strategy fit into an IT and Business strategy?

An AI strategy is a plan for how an organization will use artificial intelligence to achieve its goals. It fits into a business strategy by identifying specific business problems that AI can help solve, and outlining the steps that will be taken to implement AI solutions. The AI strategy also fits into an IT strategy by outlining the technology and infrastructure that will be needed to support the AI solutions.

An AI strategy is part of an IT and business strategy

Example Retail

For example, a retail company may use AI to improve its customer service by implementing a chatbot that can answer customer questions and help them find products. In this case, the AI strategy would be a part of the company’s overall business strategy to improve customer satisfaction. The IT strategy would need to include the implementation of the necessary technology, such as the chatbot software, and the integration of the chatbot with the company’s existing systems.

Example Healthcare

As another example, a healthcare company may use AI to improve patient outcomes by developing predictive models that can identify patients at high risk of certain conditions. In this case, the AI strategy would be a part of the company's overall business strategy to improve patient care. The IT strategy would need to include the implementation of the necessary technology, such as the predictive modeling algorithms and the necessary integration with the company's existing systems.

Questions? Comments? Want to chat? Contact me on Mastodon, Twitter or send a mail to ingmar@motionet.de

Categories
AI business IT

Prediction: in 2023 we will finally see the beginning of wider business adoption of machine learning and AI services – and here's why

At the end of 2022, ChatGPT made its way into the news and created a lot of buzz.

The reason: OpenAI, the company behind ChatGPT, developed a new frontend for its generative language model GPT-3. GPT-3 had been released two years earlier and was at that point the largest model ever created. It was only accessible through an API, though, for which one needed to get through a waiting list. ChatGPT changed the game by being an easy-to-use "chat" interface, free for everybody, for interacting with GPT-3. Many users understood for the first time what machine learning, or AI services, are capable of: they created poems, let ChatGPT write yet another Star Wars movie script, and many other funny things. But the underlying achievements OpenAI was able to come up with are nothing less than stunning – and will teach many businesses what benefits AI services can bring.

ChatGPT has made visible the potential AI services have when they are skillfully combined, or when models that have technically been around for years are trained on a volume of data that was previously unthinkable. GPT-3 contains about 10x as much data as previous models. More specifically, GPT-3 combines multiple models and techniques, such as semi-supervised learning and transformers, in an intelligent way – and that's the fascinating part.

Generally, until now, an AI model brought a number of specific "capabilities" to the table, e.g. classics like sentiment analysis ("What is the sentiment of a given text?") or classification ("Is this text a question or a statement?").
This is now different: GPT-3 can not only do the above, but also learn new things very quickly, with high efficiency and accuracy. These are known as the zero-, one- or few-shot capabilities of a model, and GPT-3 achieves incredibly good values here. This means, for example, that you can teach it to translate into a new language with just 3 "training sessions", and from then on the model does it by itself.
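What such a "training session" looks like in practice is simply text placed in the prompt. A minimal sketch of how a few-shot translation prompt is assembled (the example pairs below are illustrative):

```python
# Few-shot prompting in miniature: the "training sessions" are just
# example pairs included in the prompt text sent to the model.
examples = [
    ("sea otter", "loutre de mer"),
    ("cheese", "fromage"),
    ("peppermint", "menthe poivrée"),
]

def build_few_shot_prompt(examples, query):
    """Build an English-to-French few-shot prompt from example pairs."""
    lines = ["Translate English to French:"]
    for english, french in examples:
        lines.append(f"{english} => {french}")
    lines.append(f"{query} =>")  # the model is expected to complete this line
    return "\n".join(lines)

prompt = build_few_shot_prompt(examples, "plush giraffe")
print(prompt)
```

The model never updates its weights here; the three examples alone steer it toward producing the French translation of the final line.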

Why this is so important for companies: the ability to (autonomously) learn and adapt.

Every company claims to be unique. This may be true in some areas, but often the cross-functional areas (IT, HR, finance, etc.) are essentially the same. The HR department of a bank does not work much differently than the HR department of an automotive supplier. This also explains the success of "general" office products like Excel and co. that are used in all companies (structurally, a spreadsheet like Excel actually compares quite well to an AI model). But WHAT is calculated in Excel changes from company to company.
Modern AI architectures like GPT-3 are now able to learn exactly this by themselves:
1. What is my company-specific data to work on?
2. What are my company-specific questions that I should answer?
3. What are my company-specific added values that I should deliver?

These capabilities, which ChatGPT now presents to users in a very concrete way, are what will drive the entry of AI into companies, because the results above are simply "shocking" in a positive sense.
I see three areas in particular where we will see AI services much more often very soon:
1. Integrated AI: e.g. built directly into a piece of software to make predictions (for example, a Salesforce AI service that directly qualifies a lead).
2. Standalone AI services: e.g. a chatbot that answers customer service questions on its own.
3. Generative AI services: corporate communications, marketing copy, and sales presentations that a service creates autonomously and that a "real" employee only approves or fine-tunes afterwards.

The productivity gains are enormous, and knowledge about introducing AI services – which skills and teams are needed – will spread as well. One thing should be clear to everyone: AI services are far more than a technical tool to be rolled out; they represent a corporate change even greater than all "digitization measures" combined. Digitization, by comparison, was a wet fart.


Categories
AI IT

Going Deeper: how to build and train your own models using neural networks with PyTorch or TensorFlow

First of all, deep learning is a subfield of machine learning that involves using neural networks to build models that can process and make predictions on data. These neural networks are typically composed of multiple layers: the first layer receives input data, and each subsequent layer builds on the previous one to learn increasingly complex representations of the data.

Technically, deep learning models are trained by presenting them with large amounts of data and adjusting the model’s parameters to minimize a loss function, which measures the difference between the model’s predicted output and the correct output. This process is known as gradient descent, and it typically involves using algorithms such as backpropagation to compute the gradient of the loss function with respect to the model’s parameters.
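The same training loop can be shown in miniature without any framework: a single weight fitted by gradient descent on a squared-error loss. The data and learning rate below are made up purely for illustration:

```python
# Hand-rolled gradient descent on one parameter: fit y = w * x to the data
# by repeatedly stepping against the gradient of the squared-error loss,
# which is exactly what frameworks like PyTorch automate at scale.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # true relationship: y = 2x

w = 0.0             # initial parameter
learning_rate = 0.05

for epoch in range(100):
    # Gradient of the loss L = sum((w*x - y)^2) with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data)
    w -= learning_rate * grad  # step in the direction that reduces the loss

print(round(w, 4))  # converges towards 2.0
```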

In contrast to classical machine learning, no manual feature engineering of the input data is needed:

In contrast to machine learning (ML), features do not have to be marked manually in deep learning (DL). Deep learning algorithms are capable of identifying features themselves and identify this example as the "house of Nikolaus".

Here is an example of code for training a deep learning model using the PyTorch library:

# Import the necessary PyTorch modules
import torch
import torch.nn as nn
import torch.optim as optim

# Define the neural network architecture
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(10, 32)
        self.fc2 = nn.Linear(32, 64)
        self.fc3 = nn.Linear(64, 128)
        self.fc4 = nn.Linear(128, 10)

    def forward(self, x):
        x = self.fc1(x)
        x = nn.functional.relu(x)
        x = self.fc2(x)
        x = nn.functional.relu(x)
        x = self.fc3(x)
        x = nn.functional.relu(x)
        x = self.fc4(x)
        return x

# Create an instance of the neural network
net = Net()

# Define the loss function and the optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.01)

# Create a small synthetic dataset: batches of (inputs, labels).
# These random tensors are illustrative stand-ins for real training data.
train_data = [
    (torch.randn(16, 10), torch.randint(0, 10, (16,)))
    for _ in range(8)
]

# Train the model
for epoch in range(100):
    # Iterate over the training data
    for inputs, labels in train_data:
        # Clear the gradients
        optimizer.zero_grad()

        # Forward pass
        outputs = net(inputs)

        # Compute the loss and the gradients
        loss = criterion(outputs, labels)
        loss.backward()

        # Update the model's parameters
        optimizer.step()

This code creates a neural network with four fully-connected (fc) layers, trains it on some training data using stochastic gradient descent (SGD), and optimizes the model’s parameters to minimize the cross-entropy loss. Of course, this is just a simple example, and in practice you would want to use more sophisticated techniques to train your deep learning models.

A basic code example using TensorFlow to define and train a deep learning model may look like this:

# Import necessary TensorFlow libraries
import tensorflow as tf
from tensorflow.keras import layers

# Define the model architecture
model = tf.keras.Sequential()
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))

# Compile the model with a loss function and an optimizer
model.compile(optimizer=tf.keras.optimizers.Adam(),
              loss=tf.keras.losses.SparseCategoricalCrossentropy(),
              metrics=['accuracy'])

# Load the training data and labels
train_data = ...
train_labels = ...

# Train the model on the training data
model.fit(train_data, train_labels, epochs=5)

In this code example, the first two lines import the necessary TensorFlow libraries for defining and training a model.

The next three lines define the architecture of the model using the Sequential class and the Dense layer. The model has three dense layers with 64 units each, using the ReLU activation function for the first two layers and the softmax activation function for the final layer.

The compile method is used to specify the loss function and optimizer for training the model. In this case, we are using the SparseCategoricalCrossentropy loss function and the Adam optimizer.

Next, the training data and labels are loaded and the fit method is used to train the model on the data for 5 epochs. This will run the training process and update the model’s weights to improve its performance on the training data.

Once the model is trained, it can be used to make predictions on new, unseen data. This can be done with the predict method, as shown in the following example:

# Load the test data
test_data = ...

# Make predictions on the test data
predictions = model.predict(test_data)

In this code, the test data is loaded and passed to the predict method of the trained model. The method returns the predicted labels for the data, which can then be compared to the true labels to evaluate the model’s performance.
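That comparison reduces to counting matches between predicted and true labels. A tiny sketch with made-up label lists (for a Keras classifier you would first take the argmax over the returned class probabilities to get the predicted labels):

```python
# Accuracy = fraction of predictions that match the true labels.
# Both label lists below are made up for illustration.
predicted_labels = [0, 2, 1, 1, 0, 2, 2, 1]
true_labels      = [0, 2, 1, 0, 0, 2, 1, 1]

correct = sum(p == t for p, t in zip(predicted_labels, true_labels))
accuracy = correct / len(true_labels)
print(accuracy)  # 0.75
```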

PyTorch or TensorFlow?

Whether you use PyTorch or TensorFlow for creating, training, and querying your neural network may come down to personal or use-case-related preferences, but there are some subtle differences:

  1. Ease of use: PyTorch is generally considered to be more user-friendly than TensorFlow, particularly for tasks such as building and training neural networks. PyTorch provides a high-level interface for defining and training models, while TensorFlow can be more verbose and require more boilerplate code.
  2. Performance: TensorFlow is generally considered to be more efficient and scalable than PyTorch, particularly for distributed training and serving models in production. TensorFlow also has a number of tools and libraries for optimizing performance, such as the XLA compiler and TensorRT.
  3. Community: TensorFlow has a larger and more established community, with more resources and support available online. PyTorch is a newer framework and is rapidly growing in popularity, but it may not have as much support as TensorFlow.


Categories
AI business IT

How Tensorflow can help HR departments to streamline their processes

TensorFlow is a powerful open-source tool developed by Google that can help HR departments in a variety of ways. At its core, TensorFlow is a machine learning platform that allows users to build and train complex models using large amounts of data. This ability to process large amounts of data quickly and accurately makes TensorFlow an ideal tool for HR departments looking to improve their processes and make more informed decisions.

TensorFlow and Recruiting

One of the key ways that TensorFlow can help HR departments is by automating and improving the process of recruitment and selection. By training a model on large amounts of data (e.g. from SAP SuccessFactors, Workday etc.), HR departments can use TensorFlow to identify the most important factors in determining a successful candidate and automate the process of sifting through resumes and applications. This can save HR departments a significant amount of time and resources, and allow them to focus on other important tasks.

TensorFlow and Performance Management

Another area where TensorFlow can be useful for HR departments is in performance management. By training a model on data about an employee’s past performance, HR departments can use TensorFlow to identify patterns and trends that may indicate an employee’s potential for future success. This can help HR departments make more informed decisions about promotions, salary increases, and other important decisions related to employee performance.

TensorFlow can also be used to improve the accuracy and fairness of salary and compensation decisions. By training a model on data about an employee’s past performance, job responsibilities, and other factors, HR departments can use TensorFlow to identify any potential biases or inconsistencies in their current compensation practices. This can help HR departments ensure that their compensation decisions are fair and based on objective criteria, and can help to prevent discrimination and other potential legal issues.

TensorFlow and Reportings

In addition to these specific applications, TensorFlow can also help HR departments in more general ways. For example, TensorFlow can be used to automate and improve the process of generating reports and analytics, which can help HR departments make more informed decisions about the effectiveness of their policies and practices. Additionally, TensorFlow can be used to identify potential issues and trends within an organization, such as high turnover rates or low employee satisfaction, and provide HR departments with the information they need to address these issues.

TensorFlow to identify employees who might leave

Traditional methods of predicting employee turnover often rely on manual analysis of a small number of data points, such as employee performance reviews or exit interviews. This can be time-consuming and may not provide a complete picture of an employee’s likelihood of leaving the company.

TensorFlow, on the other hand, can analyze vast amounts of data from various sources, including employee performance data, demographics, and other relevant factors. This allows HR departments to gain a more comprehensive view of an employee’s likelihood of leaving the company, enabling them to make more informed decisions about retention strategies. Traditional methods of predicting employee turnover may not be able to identify subtle patterns or trends that could be indicative of an employee’s likelihood of leaving the company. TensorFlow, on the other hand, can identify these patterns and trends, providing HR departments with valuable insights into the factors that may be contributing to employee turnover.

From reaction to action: act before an employee leaves

One example of how TensorFlow can be used in the area of employee turnover prediction is through the development of a predictive model. This model could be trained on a large dataset of employee data, including factors such as performance metrics, demographics, and job satisfaction. The model could then be used to predict the likelihood of an individual employee leaving the company based on the data provided: it may identify, for instance, that employees with low job satisfaction are more likely to leave. HR departments could then implement strategies to improve job satisfaction, such as offering training or career development opportunities, in an effort to reduce employee turnover.

Another potential application in the area of employee turnover prediction is the development of an employee turnover dashboard. This dashboard could provide HR departments with a visual representation of employee turnover data, allowing them to easily identify trends and patterns. It could also give HR departments real-time alerts when an employee is at risk of leaving the company, allowing them to take immediate action to retain that employee.
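As a toy illustration of such a predictive model (deliberately plain Python rather than TensorFlow, to keep it self-contained): a logistic regression trained by gradient descent on made-up employee records with two features, job satisfaction and tenure.

```python
import math

# Made-up employee records: (job satisfaction 0-1, years at company),
# label 1 = employee left the company, 0 = employee stayed.
employees = [
    ((0.2, 1.0), 1), ((0.3, 2.0), 1), ((0.8, 5.0), 0),
    ((0.9, 3.0), 0), ((0.4, 1.0), 1), ((0.7, 6.0), 0),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

weights = [0.0, 0.0]
bias = 0.0
lr = 0.5

# Stochastic gradient descent on the log-loss of a logistic regression
for _ in range(500):
    for (satisfaction, years), left in employees:
        pred = sigmoid(weights[0] * satisfaction + weights[1] * years + bias)
        error = pred - left  # gradient of the log-loss w.r.t. the logit
        weights[0] -= lr * error * satisfaction
        weights[1] -= lr * error * years
        bias -= lr * error

def leave_probability(satisfaction, years):
    """Predicted probability that an employee with these features leaves."""
    return sigmoid(weights[0] * satisfaction + weights[1] * years + bias)

# An unhappy newcomer should score higher than a satisfied veteran.
print(leave_probability(0.25, 1.0) > leave_probability(0.85, 5.0))  # True
```

A real model would of course use many more features and records, plus a held-out test set, but the mechanics are the same.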

TensorFlow vs. Azure Cognitive Services in HR processes

As stated above, TensorFlow and Azure Cognitive Services are both powerful tools for machine learning and artificial intelligence (AI) applications. While TensorFlow is an open-source library for machine learning and deep learning applications, Azure Cognitive Services is a suite of AI services provided by Microsoft. Both tools have their own advantages and disadvantages, which should be considered when deciding which one to use for a particular project.

One major advantage of TensorFlow is its flexibility. TensorFlow allows developers to build and train their own custom machine learning models, which can be tailored to specific applications and data sets. This flexibility can be particularly useful for complex projects that require specialized models or algorithms.

Another advantage of TensorFlow is its ability to handle large amounts of data. TensorFlow is designed to scale to large data sets, allowing it to handle large volumes of data without sacrificing performance. This makes it ideal for projects that require the analysis of large amounts of data, such as natural language processing or image recognition.

However, TensorFlow also has some disadvantages. One of the main ones is its complexity. TensorFlow is a powerful tool, but it can be difficult for beginners or inexperienced IT departments to learn and use. To use TensorFlow effectively, developers need a strong understanding of machine learning algorithms and techniques, as well as experience with programming languages such as Python.

In contrast, Azure Cognitive Services is a more user-friendly tool. Azure Cognitive Services provides pre-trained machine learning models that can be easily integrated into applications without the need for extensive programming knowledge. This makes it a good choice for developers who are new to machine learning or who want to quickly add AI capabilities to their applications.

Another advantage of Azure Cognitive Services is its availability. Azure Cognitive Services is available as a cloud-based service, which means that developers can easily access and use the service without the need to install any software or hardware. This can be particularly useful for developers who are working on projects that require fast deployment or who do not have access to dedicated machine learning hardware.

However, Azure Cognitive Services also has some disadvantages. One major disadvantage of Azure Cognitive Services is its cost. Azure Cognitive Services is a subscription-based service, which means that developers need to pay for the service on a monthly or annual basis. This can be expensive, especially for projects that require the use of multiple Azure Cognitive Services.

Another disadvantage of Azure Cognitive Services is its lack of flexibility. Because Azure Cognitive Services provides pre-trained models, developers are limited to using the models that are provided by the service. This can be limiting for projects that require custom models or algorithms.

In conclusion, TensorFlow and Azure Cognitive Services are both powerful tools for machine learning and AI applications. TensorFlow offers flexibility and the ability to handle large amounts of data, but it can be complex and difficult to use. Azure Cognitive Services is user-friendly and available as a cloud-based service, but it can be expensive and lacks flexibility. The best choice between the two will depend on the specific requirements of the HR project and the experience and expertise of the development team.

At my company my-vpa.com, which is basically an HR tech company, we mainly use Azure and AWS Comprehend for our HR processes. For example, we implemented an AI-powered zero-touch recruiting process that is capable of recruiting up to 200 assistants per month.
