Categories
life

White Lotus: the Instagram-style series of our time

Having binged The White Lotus season 1, I have to say it reminds me of a long-format Instagram movie. The filters applied to the scenery are just beautiful, though I sometimes find them a bit too much. The soundtrack, however, is incredible. The show recently won 10 Emmys.

Update: I found season 2 even better than season 1 – it's more complex and the ending is… 😉

Categories
AI IT

ChatGPT and SQL

ChatGPT and SQL seem to be natural allies: you can describe a query in natural language to ChatGPT, get back the SQL code, and edit it in your favorite SQL editor. In my case, I am a keen user of Metabase for doing any kind of BI work directly in the database, and from there I modify the queries with ChatGPT. What I find _very_ special is that ChatGPT makes mistakes – and bluntly apologizes for them and corrects itself. That's amazing.

ChatGPT apologizing and correcting itself
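
For illustration, the same workflow can also be scripted. Here is a minimal sketch using the openai Python package (pre-1.0 interface); the model name, the toy schema and the ask_for_sql helper are my own assumptions for this example, not part of any product:

# Minimal sketch: turn a natural-language question into SQL via the
# OpenAI API (openai<1.0 interface); schema and helper are illustrative.
import openai

openai.api_key = "sk-..."  # your API key

def ask_for_sql(question: str, schema: str) -> str:
    """Ask the model to draft a SQL query for the given question."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": f"You write PostgreSQL queries. Schema:\n{schema}"},
            {"role": "user", "content": question},
        ],
    )
    return response["choices"][0]["message"]["content"]

schema = "orders(id, customer_id, total, created_at)"
print(ask_for_sql("Monthly revenue for 2022, newest month first", schema))

The returned SQL can then be pasted into Metabase's native query editor and refined there.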
Categories
IT life

Does it make sense to have both a private and a work computer?

I am in front of my computer most of the day. I am using a MacBook Pro with a Studio Display, Magic Mouse and Magic Keyboard. My software setup is as follows:

  1. I have been using Apple Mail for ages – it's fast and I really love the search function
  2. I recently switched to Visual Studio for coding after the discontinuation of Atom
  3. Google Workspace for most office-related stuff
  4. Excel for calculating things
  5. I am all in on Safari, embracing the deep integration of Keychain
  6. Apple Calendar and Reminders for to-do stuff
  7. SmartGit for versioning
  8. Jira and Freshdesk for ticketing
  9. Mastonaut for posting on Mastodon
  10. the WhatsApp desktop client
  11. and Slack for keeping it all together

Thing is – when I want to switch to "private" mode and use the computer, e.g. to make music in Logic Pro or edit my photos in Lightroom, there's always the feeling that "work is too close". I know I can have multiple users on the machine, but it still "feels" like work.

Therefore I am thinking of adding a private machine, so that I can put the MacBook aside on weekends or holidays and "enjoy" private computer time on… a Mac mini? What is your setup? Do you have separate machines for work and private stuff?

Categories
    AI

Should we use generative AI like ChatGPT in journalism, schools or communication?

As it turns out, CNET has been using generative or assistive AI for months to create articles – or, better said, to "assist their authors" in writing articles.

Together with the recent announcement that Microsoft is integrating ChatGPT into its suites (namely Outlook etc.), we are stepping into an age in which text is no longer created exclusively by humans.

In my opinion we will still find differences and nuances between purely AI-generated texts and human-written texts – but the assistive function of AI will have an impact on the style and especially the length of texts, as long form will come back into vogue thanks to the higher writing efficiency of AI assistants.

When using Microsoft's GitHub Copilot, my code output also increased, and I can imagine a similar effect when writing ChatGPT-powered texts in, say, Word. We have had a taste of this for years in the Google Search suggestion box, and we all love it – only now it will expand to whole text blocks.

Generative AI as an assistant to enhance writing productivity – to me it compares to the calculator in school: I can calculate in my head, but the calculator does it better and faster. Nonetheless, I still need to figure out what should be calculated.

Categories
General life

Less gloating would do us good

Ronaldo misses a penalty against Austria at Euro 2016. Poor guy 😉

When I switched from Twitter to Mastodon a few months ago out of frustration, I was very happy with the conversational culture there – balanced topics, funny, interesting, knowledgeable, but never one-sided.

But since more and more users, including followers of mine, have been moving from Twitter to Mastodon, I notice the communication culture changing – and I find that a real shame.

Yesterday the topic "Amthor // zur Löwen" was paraded through the "village"; today it is Wissing's appearance on a talk show last night. The dominant tone: gloating and mockery. And I don't understand it. I find it insufferably tiring, toxic and small-minded.

For gloating, a look at the definition tells us, "a damaging event (e.g. an accident, injury, mishap) must befall the other person. It occurs without any involvement of the gloating orator. The damaging event relates to the tension between the two and confirms the gloater's opinion. The goal of the act of gloating is the perceived elevation of the gloater's position of power at the expense of the other."

But if this produces a perceived elevation of one's own position, it means I must have felt inferior before. Why do so many people feel so inferior to others that they believe they have to lift themselves into a different position through gloating and mockery? Where has the self-confidence gone that reacts to the supposed mistakes of others not with mockery but with compassion or, at best, indifference? I don't even want to imagine what a shitstorm does emotionally to its "victims", but even standing on the sidelines and having to watch is awful. And it poisons the timeline and makes a wonderful medium like Mastodon more difficult.

I don't want to go on about old virtues or anything, but I do think that with a new "chance" like the one here on Mastodon, we should ask ourselves the question of politeness and gentleness, composure and the ability to take criticism in our communication – and hopefully answer it in a way that finally sheds the small-minded, truly toxic and insulting manners of Twitter.

Categories
    AI IT

How does an AI strategy fit into an IT and business strategy?

    An AI strategy is a plan for how an organization will use artificial intelligence to achieve its goals. It fits into a business strategy by identifying specific business problems that AI can help solve, and outlining the steps that will be taken to implement AI solutions. The AI strategy also fits into an IT strategy by outlining the technology and infrastructure that will be needed to support the AI solutions.

An AI strategy is part of an IT and business strategy

Example: Retail

For example, a retail company may use AI to improve its customer service by implementing a chatbot that can answer customer questions and help them find products. In this case, the AI strategy would be part of the company's overall business strategy to improve customer satisfaction. The IT strategy would need to include the implementation of the necessary technology, such as the chatbot software, and the integration of the chatbot with the company's existing systems.

Example: Healthcare

Another example: a healthcare company may use AI to improve patient outcomes by developing predictive models that can identify patients at high risk for certain conditions. In this case, the AI strategy would be part of the company's overall business strategy to improve patient care. The IT strategy would need to include the implementation of the necessary technology, such as the predictive-modeling algorithms, and the integration with the company's existing systems.

Questions? Comments? Want to chat? Contact me on Mastodon, Twitter or send a mail to ingmar@motionet.de

Categories
    books holidays life reading

Resolutions for 2023

Well, resolutions sometimes work out better and sometimes worse – but here, at least, are mine for the new year – incomplete and with no guarantee of follow-through:

1. As much family time as possible. Since our little son was born at the beginning of 2022, I see life through different eyes and my priorities have turned a full 180 degrees. My little family is the most important thing to me.
2. Read even more "beautiful" classics. At the turn of the year I arrived at Moby Dick, but I also plan to read more of the whole Tom Sawyer stuff, as well as Agatha Christie. I usually keep the following weighting among the books at the front of my Kindle library:
  1/3 non-fiction (IT, business stuff)
  1/3 fiction (classics, novels, crime)
  1/3 biographies
  This may turn out to be a more imaginative year.
3. Keep up my 100 push-ups a day and add sit-ups (I worked my way up to the 100 step by step over months via "Atomic Habits", and it also had something to do with my back pain).
4. I feel like doing a triathlon again – but it can only be a short one, see no. 1.
5. Continue / finish writing my book. I have had the idea for years, as well as the structure and the first two chapters. It is a non-fiction book about everything from my years in IT and business that I have always wanted to get off my chest and have only ever preached – now it is being put into writing.
6. Keep up my blogging frequency. Last year I started writing more again, in long form here and over on Mastodon. I simply enjoy it, and if I remember correctly, my blog turns 20 this year – happy birthday, young whippersnapper!
7. Work from a café more often. I am quite free in choosing where I work, but last year, for the very first time, I worked a whole day from a café (yes, it was a Starbucks, but besides the stock I really do love the coffee there). It was somehow totally productive, and I want to do that at least once more this year.
8. Keep travelling as much as before. Being on the road in our camper and simply being able to stop anywhere is the greatest thing for me.
9. Take more photos. It has somehow tapered off, and I have now discovered that the manual modes on my Canon no longer work – so it has to go in for service anyway, and while I am at it I might treat myself to a new lens. I find photographing without a phone simply contemplative, and it is great fun.
10. Singing. Yes, see no. 1 – we sing every day from 5 in the morning, and I get the guitar out to go with it. As long as there are umpteen thousand coffees to go with it, it is the best time of the day.
11. Make more music. In 2021 I composed and produced a song again – and that was incredibly fulfilling and fun. Problem: it takes loads of time if it is supposed to be any good. So – maybe that is more something for '24.
Categories
    AI business IT

Prediction: in 2023 we will finally see the beginning of wider business adoption of machine learning and AI services – and here's why

At the end of 2022, ChatGPT made its way into the news and created a lot of buzz.

The reason: OpenAI, the company behind ChatGPT, developed a new frontend for its generative language model GPT-3. GPT-3 had been released about two years earlier and was already the largest model ever created. It was only accessible via an API, though, for which one had to get through a waiting list. ChatGPT changed the game by being an easy-to-use, free-for-everybody "chat" interface for interacting with GPT-3. Many users understood for the first time what machine learning, or AI services, are capable of: they created poems, let ChatGPT write yet another Star Wars movie script, and many other funny things. But the underlying achievements OpenAI has come up with are nothing less than stunning – and will teach many businesses what benefits AI services can bring.

ChatGPT has made visible the potential that AI services have when they are skillfully combined, and when models that have technically been around for years are trained on amounts of data that were previously unthinkable. GPT-3 was trained on roughly 10x as much data as previous models. More specifically, GPT-3 combines multiple models and techniques, such as semi-supervised learning and transformers, in an intelligent way – and that's the fascinating part.

Generally, until now, an AI model brought a fixed set of "capabilities" to the table, e.g. classics like sentiment analysis ("What is the sentiment of a given text?") or classification ("Is this text a question or a statement?").
This is now different: GPT-3 can not only do the above, but can also pick up new tasks very quickly with high efficiency and accuracy. These are called the zero-, one- or few-shot capabilities of a model, and here GPT-3 achieves incredibly good scores. It means, for example, that you can teach it to translate into a new language with just three examples in the prompt, and from then on the model does it by itself.
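
To make this concrete, here is a sketch of what such a few-shot "training session" looks like against the GPT-3 API – the "training" is nothing more than three examples in the prompt, no retraining involved (this uses the pre-1.0 openai Python package; the example pairs are toy data):

# Few-shot sketch: three English->German pairs "teach" the task inside
# the prompt, and GPT-3 continues the pattern for the new word.
import openai

openai.api_key = "sk-..."  # your API key

prompt = (
    "Translate English to German.\n"
    "sea otter => Seeotter\n"
    "peppermint => Pfefferminze\n"
    "cheese => Käse\n"
    "plush giraffe =>"
)

completion = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    max_tokens=10,
    temperature=0,
)
print(completion["choices"][0]["text"].strip())  # e.g. "Plüschgiraffe"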

    Why this is so important for companies: the ability to (autonomously) learn and adapt.

Every company claims to be unique. That may be true in some areas, but the cross-functional ones (IT, HR, finance, etc.) are often essentially the same everywhere. The HR department of a bank does not work much differently from the HR department of an automotive supplier. This also explains the success of "general" office products like Excel and co. that are used in every company (a spreadsheet like Excel, by the way, compares structurally quite well with an AI model). But WHAT is calculated in Excel changes from company to company.
Modern AI architectures like GPT-3 are now able to learn exactly this by themselves:
1. What is my company-specific data to work on?
2. What are my company-specific questions to answer?
3. What are my company-specific added values to deliver?

These capabilities, which ChatGPT now demonstrates to users in a very concrete way, are what will drive the entry of AI into companies, because the results above are simply "shocking" in a positive sense.
I see three areas in particular where we will see AI services much more often very soon:
1. integrated AI: e.g. AI built directly into a piece of software to make predictions (for example a Salesforce AI service that qualifies a lead on the spot)
2. standalone AI services: e.g. a chatbot that answers customer service questions on its own
3. generative AI services: corporate communications, marketing copy, sales presentations that a service creates autonomously and that a "real" employee merely approves or fine-tunes afterwards

The productivity gains are enormous, and the knowledge about introducing AI services – which skills and which teams are needed – will spread as well. Because one thing should be clear to everyone: AI services are far more than a technical tool to be introduced; they are, to an even greater extent, a corporate change bigger than all "digitization measures" combined. Digitization, by comparison, was a wet fart.

Questions? Comments? Want to chat? Contact me on Mastodon, Twitter or send a mail to ingmar@motionet.de

Categories
    AI IT

    Going Deeper: how to build and train your own models using neural networks with PyTorch or TensorFlow

    First of all, Deep learning is a subfield of machine learning that involves using neural networks to build models that can process and make predictions on data. These neural networks are typically composed of multiple layers, with the first layer receiving input data and each subsequent layer building on the previous one to learn increasingly complex representations of the data.

    Technically, deep learning models are trained by presenting them with large amounts of data and adjusting the model’s parameters to minimize a loss function, which measures the difference between the model’s predicted output and the correct output. This process is known as gradient descent, and it typically involves using algorithms such as backpropagation to compute the gradient of the loss function with respect to the model’s parameters.

In contrast to classical machine learning, no manual feature labeling of the input data is needed:

In contrast to machine learning (ML), features do not have to be labeled manually in deep learning (DL). Deep learning algorithms are capable of identifying features themselves and can identify this example as the "house of Nikolaus".

Here is an example of code for training a deep learning model using the PyTorch library:

# Import the necessary PyTorch modules
import torch
import torch.nn as nn
import torch.optim as optim

# Define the neural network architecture
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(10, 32)
        self.fc2 = nn.Linear(32, 64)
        self.fc3 = nn.Linear(64, 128)
        self.fc4 = nn.Linear(128, 10)

    def forward(self, x):
        x = nn.functional.relu(self.fc1(x))
        x = nn.functional.relu(self.fc2(x))
        x = nn.functional.relu(self.fc3(x))
        return self.fc4(x)

# Create an instance of the neural network
net = Net()

# Define the loss function and the optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.01)

# Training data: an iterable of (inputs, labels) batches,
# e.g. a torch.utils.data.DataLoader
train_data = ...

# Train the model
for epoch in range(100):
    # Iterate over the training data
    for inputs, labels in train_data:
        # Clear the gradients from the previous step
        optimizer.zero_grad()

        # Forward pass
        outputs = net(inputs)

        # Compute the loss and the gradients
        loss = criterion(outputs, labels)
        loss.backward()

        # Update the model's parameters
        optimizer.step()
    

    This code creates a neural network with four fully-connected (fc) layers, trains it on some training data using stochastic gradient descent (SGD), and optimizes the model’s parameters to minimize the cross-entropy loss. Of course, this is just a simple example, and in practice you would want to use more sophisticated techniques to train your deep learning models.
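
Once trained, the network can be used for inference. A minimal sketch, with a random tensor standing in for a real input:

# Inference sketch: switch to eval mode and disable gradient tracking
net.eval()
with torch.no_grad():
    sample = torch.randn(1, 10)                    # one sample, 10 features
    logits = net(sample)                           # raw scores for 10 classes
    predicted_class = logits.argmax(dim=1).item()  # index of the best class
print(predicted_class)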

A basic code example using TensorFlow to define and train a deep learning model may look like this:

# Import the necessary TensorFlow libraries
import tensorflow as tf
from tensorflow.keras import layers

# Define the model architecture (the input shape is inferred
# from the training data on the first call to fit)
model = tf.keras.Sequential()
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))

# Compile the model with a loss function and an optimizer
model.compile(optimizer=tf.keras.optimizers.Adam(),
              loss=tf.keras.losses.SparseCategoricalCrossentropy(),
              metrics=['accuracy'])

# Load the training data and labels
train_data = ...
train_labels = ...

# Train the model on the training data
model.fit(train_data, train_labels, epochs=5)

    In this code example, the first two lines import the necessary TensorFlow libraries for defining and training a model.

The next four lines define the architecture of the model using the Sequential class and Dense layers. The model has two hidden layers with 64 units each, using the ReLU activation function, and a final layer with 10 units using the softmax activation function.

    The compile method is used to specify the loss function and optimizer for training the model. In this case, we are using the SparseCategoricalCrossentropy loss function and the Adam optimizer.

    Next, the training data and labels are loaded and the fit method is used to train the model on the data for 5 epochs. This will run the training process and update the model’s weights to improve its performance on the training data.

    Once the model is trained, it can be used to make predictions on new, unseen data. This can be done with the predict method, as shown in the following example:

# Load the test data and the true test labels
test_data = ...
test_labels = ...

# Make predictions on the test data
predictions = model.predict(test_data)
    

In this code, the test data is loaded and passed to the predict method of the trained model. The method returns the predicted class probabilities for each sample, which can then be compared to the true labels to evaluate the model's performance.
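
For instance, a quick accuracy check could look like this (a sketch, assuming test_labels holds one integer class index per sample):

import numpy as np

# predictions has shape (num_samples, 10): one row of class probabilities each
predicted_classes = np.argmax(predictions, axis=1)
accuracy = np.mean(predicted_classes == test_labels)
print(f"Test accuracy: {accuracy:.2%}")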

PyTorch or TensorFlow?

Whether you use PyTorch or TensorFlow for creating, training and querying your neural network may come down to personal or use-case-specific preferences, but there are some notable differences:

    1. Ease of use: PyTorch is generally considered to be more user-friendly than TensorFlow, particularly for tasks such as building and training neural networks. PyTorch provides a high-level interface for defining and training models, while TensorFlow can be more verbose and require more boilerplate code.
    2. Performance: TensorFlow is generally considered to be more efficient and scalable than PyTorch, particularly for distributed training and serving models in production. TensorFlow also has a number of tools and libraries for optimizing performance, such as the XLA compiler and TensorRT.
    3. Community: TensorFlow has a larger and more established community, with more resources and support available online. PyTorch is a newer framework and is rapidly growing in popularity, but it may not have as much support as TensorFlow.

Questions? Comments? Want to chat? Contact me on Mastodon, Twitter or send a mail to ingmar@motionet.de

Categories
    AI business IT

How TensorFlow can help HR departments streamline their processes

TensorFlow is a powerful open-source machine learning platform developed by Google that can help HR departments in a variety of ways. At its core, TensorFlow lets users build and train complex models using large amounts of data. This ability to process large amounts of data quickly and accurately makes TensorFlow an ideal tool for HR departments looking to improve their processes and make more informed decisions.

TensorFlow and Recruiting

One of the key ways TensorFlow can help HR departments is by automating and improving recruitment and selection. By training a model on large amounts of data (e.g. from SAP SuccessFactors, Workday, etc.), HR departments can use TensorFlow to identify the most important factors in determining a successful candidate and automate the sifting through resumes and applications. This can save HR departments a significant amount of time and resources and allow them to focus on other important tasks.

    TensorFlow and Performance Management

    Another area where TensorFlow can be useful for HR departments is in performance management. By training a model on data about an employee’s past performance, HR departments can use TensorFlow to identify patterns and trends that may indicate an employee’s potential for future success. This can help HR departments make more informed decisions about promotions, salary increases, and other important decisions related to employee performance.

    TensorFlow can also be used to improve the accuracy and fairness of salary and compensation decisions. By training a model on data about an employee’s past performance, job responsibilities, and other factors, HR departments can use TensorFlow to identify any potential biases or inconsistencies in their current compensation practices. This can help HR departments ensure that their compensation decisions are fair and based on objective criteria, and can help to prevent discrimination and other potential legal issues.

TensorFlow and Reporting

    In addition to these specific applications, TensorFlow can also help HR departments in more general ways. For example, TensorFlow can be used to automate and improve the process of generating reports and analytics, which can help HR departments make more informed decisions about the effectiveness of their policies and practices. Additionally, TensorFlow can be used to identify potential issues and trends within an organization, such as high turnover rates or low employee satisfaction, and provide HR departments with the information they need to address these issues.

TensorFlow to identify employees who might leave

    Traditional methods of predicting employee turnover often rely on manual analysis of a small number of data points, such as employee performance reviews or exit interviews. This can be time-consuming and may not provide a complete picture of an employee’s likelihood of leaving the company.

TensorFlow, on the other hand, can analyze vast amounts of data from various sources, including performance data, demographics, and other relevant factors. This gives HR departments a more comprehensive view of an employee's likelihood of leaving and enables more informed decisions about retention strategies. Traditional methods may also miss subtle patterns or trends that indicate someone is about to leave; TensorFlow can surface these patterns, providing HR departments with valuable insight into the factors contributing to turnover.

From Reaction to Action: act before an employee leaves

One example of how TensorFlow can be used for turnover prediction is the development of a predictive model. This model could be trained on a large dataset of employee data, including factors such as performance metrics, demographics, and job satisfaction, and then used to predict the likelihood of an individual employee leaving the company. If the model identifies, say, that employees with low job satisfaction are more likely to leave, HR departments could implement strategies to improve job satisfaction, such as offering training or career development opportunities, in an effort to reduce turnover.

Another option is an employee turnover dashboard. It could give HR departments a visual representation of turnover data, making trends and patterns easy to spot, and raise real-time alerts when an employee is at risk of leaving, so that HR can take immediate action to retain them.
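
To make that concrete, a minimal sketch of such a turnover model in TensorFlow might look like the following – the feature set, the placeholder data and the 50% alert threshold are purely illustrative assumptions, not a production design:

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Illustrative features per employee: tenure, performance score,
# salary percentile, job satisfaction (random placeholders here)
X_train = np.random.rand(1000, 4)
# Label: 1 = employee left, 0 = employee stayed (placeholders)
y_train = np.random.randint(0, 2, size=1000)

# A small binary classifier that outputs a probability of leaving
model = tf.keras.Sequential([
    layers.Dense(16, activation='relu', input_shape=(4,)),
    layers.Dense(16, activation='relu'),
    layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])
model.fit(X_train, y_train, epochs=10, verbose=0)

# Flag employees whose predicted attrition risk exceeds 50%,
# e.g. to drive the real-time alerts on a dashboard
at_risk = model.predict(X_train)[:, 0] > 0.5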

    TensorFlow vs. Azure Cognitive Services in HR processes

As stated above, TensorFlow and Azure Cognitive Services are both powerful tools for machine learning and artificial intelligence (AI) applications. While TensorFlow is an open-source library for machine learning and deep learning, Azure Cognitive Services is a suite of AI services provided by Microsoft. Both tools have their own advantages and disadvantages, which should be weighed when deciding which to use for a particular project.

    One major advantage of TensorFlow is its flexibility. TensorFlow allows developers to build and train their own custom machine learning models, which can be tailored to specific applications and data sets. This flexibility can be particularly useful for complex projects that require specialized models or algorithms.

    Another advantage of TensorFlow is its ability to handle large amounts of data. TensorFlow is designed to scale to large data sets, allowing it to handle large volumes of data without sacrificing performance. This makes it ideal for projects that require the analysis of large amounts of data, such as natural language processing or image recognition.

However, TensorFlow also has some disadvantages. The main one is its complexity. TensorFlow is a powerful tool, but it can be difficult for beginners or inexperienced IT departments to learn and use. To use TensorFlow effectively, developers need a strong understanding of machine learning algorithms and techniques, as well as experience with programming languages such as Python.

    In contrast, Azure Cognitive Services is a more user-friendly tool. Azure Cognitive Services provides pre-trained machine learning models that can be easily integrated into applications without the need for extensive programming knowledge. This makes it a good choice for developers who are new to machine learning or who want to quickly add AI capabilities to their applications.

    Another advantage of Azure Cognitive Services is its availability. Azure Cognitive Services is available as a cloud-based service, which means that developers can easily access and use the service without the need to install any software or hardware. This can be particularly useful for developers who are working on projects that require fast deployment or who do not have access to dedicated machine learning hardware.

    However, Azure Cognitive Services also has some disadvantages. One major disadvantage of Azure Cognitive Services is its cost. Azure Cognitive Services is a subscription-based service, which means that developers need to pay for the service on a monthly or annual basis. This can be expensive, especially for projects that require the use of multiple Azure Cognitive Services.

    Another disadvantage of Azure Cognitive Services is its lack of flexibility. Because Azure Cognitive Services provides pre-trained models, developers are limited to using the models that are provided by the service. This can be limiting for projects that require custom models or algorithms.

    In conclusion, TensorFlow and Azure Cognitive Services are both powerful tools for machine learning and AI applications. TensorFlow offers flexibility and the ability to handle large amounts of data, but it can be complex and difficult to use. Azure Cognitive Services is user-friendly and available as a cloud-based service, but it can be expensive and lacks flexibility. The best choice between the two will depend on the specific requirements of the HR project and the experience and expertise of the development team.

In my company, my-vpa.com, which is basically an HR tech company, we mainly use Azure and AWS Comprehend for our HR processes. For example, we implemented an AI-powered zero-touch recruiting process that is capable of recruiting up to 200 assistants per month.
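
As a taste of how little code such a pre-trained service needs, here is a minimal AWS Comprehend sketch using boto3 – the applicant text and the region are made-up examples, and a real pipeline does considerably more:

import boto3

# Create a Comprehend client (the region is just an example)
comprehend = boto3.client("comprehend", region_name="eu-central-1")

# Gauge the sentiment of a free-text answer from an application form
answer = "I love supporting clients and keeping their inboxes at zero."
result = comprehend.detect_sentiment(Text=answer, LanguageCode="en")
print(result["Sentiment"], result["SentimentScore"])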

Questions? Comments? Want to chat? Contact me on Mastodon, Twitter or send a mail to ingmar@motionet.de