Generative AI for Windows

  • 1
    Satori

    Enlightened library to convert HTML and CSS to SVG

    Satori supports the JSX syntax, which makes it very straightforward to use. Satori renders the element into a 600×400 SVG and returns the SVG string. Under the hood, it handles layout calculation, fonts, typography and more to generate an SVG that matches how the same HTML and CSS would render in a browser. Satori only accepts JSX elements that are pure and stateless; you can use a subset of HTML elements (see the project documentation) or custom React components, but React APIs such as useState, useEffect and dangerouslySetInnerHTML are not supported. Due to its special use cases, Satori supports only a limited subset of HTML and CSS features; in general, only static and visible elements and properties are implemented. Also, Satori does not guarantee that the SVG will match the browser-rendered HTML output 100%, since Satori implements its own layout engine based on the SVG 1.1 spec.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 2
    SentenceTransformers

    Multilingual sentence & image embeddings with BERT

    SentenceTransformers is a Python framework for state-of-the-art sentence, text and image embeddings. The initial work is described in our paper Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. You can use this framework to compute sentence / text embeddings for more than 100 languages. These embeddings can then be compared, e.g. with cosine similarity, to find sentences with a similar meaning, which is useful for semantic textual similarity, semantic search, or paraphrase mining. The framework is based on PyTorch and Transformers and offers a large collection of pre-trained models tuned for various tasks. Further, it is easy to fine-tune your own models. Our models are evaluated extensively and achieve state-of-the-art performance on various tasks, and the code is tuned to provide the highest possible speed.
    Downloads: 0 This Week
    Last Update:
    See Project
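
    A minimal usage sketch of the encode-and-compare workflow described above (all-MiniLM-L6-v2 is one of the library's published pre-trained checkpoints; any other model name works the same way):

    ```python
    from sentence_transformers import SentenceTransformer, util

    # Load a pre-trained embedding model (downloads on first use).
    model = SentenceTransformer("all-MiniLM-L6-v2")

    sentences = [
        "A man is eating food.",
        "Someone is having a meal.",
        "The sky is blue today.",
    ]

    # Compute one embedding vector per sentence.
    embeddings = model.encode(sentences, convert_to_tensor=True)

    # Pairwise cosine similarities; semantically close sentences score higher.
    scores = util.cos_sim(embeddings, embeddings)
    print(scores)
    ```
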
  • 3
    Seq2seq Chatbot for Keras

    This repository contains a new generative model of chatbot

    This repository contains a new generative model of chatbot based on seq2seq modeling. The trained model available here used a small dataset composed of ~8K pairs of context (the last two utterances of the dialogue up to the current point) and the respective response. The data were collected from dialogues of English courses online. This trained model can be fine-tuned on a closed-domain dataset for real-world applications. The canonical seq2seq model became popular in neural machine translation, a task with different prior probability distributions for the words in the input and output sequences, since the input and output utterances are written in different languages. The architecture presented here assumes the same prior distribution for input and output words; therefore, it shares an embedding layer (GloVe pre-trained word embeddings) between the encoding and decoding processes through the adoption of a new model.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 4
    Shap-E

    Generate 3D objects conditioned on text or images

    The shap-e repository provides the official code and model release for Shap-E, a conditional generative model designed to produce 3D assets (implicit functions, meshes, neural radiance fields) from text or image prompts. The model is built with a two-stage architecture: first an encoder that maps existing 3D assets into parameterizations of implicit functions, and then a conditional diffusion model trained on those parameterizations to generate new assets. Because it works at the level of implicit functions, Shap-E can render output both as textured meshes and NeRF-style volumetric renderings. The repository contains sample notebooks (e.g. sample_text_to_3d.ipynb, sample_image_to_3d.ipynb) so users can try out text → 3D or image → 3D generation. The code is distributed under the MIT license, and includes a “model card” that documents limitations, recommended use, and ethical considerations.
    Downloads: 0 This Week
    Last Update:
    See Project
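
    A text-to-3D sketch adapted from the repository's sample_text_to_3d.ipynb; the model names, helper functions and sampling arguments below follow that notebook and should be treated as assumptions that may differ between releases:

    ```python
    import torch

    from shap_e.diffusion.sample import sample_latents
    from shap_e.diffusion.gaussian_diffusion import diffusion_from_config
    from shap_e.models.download import load_model, load_config
    from shap_e.util.notebooks import decode_latent_mesh

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    xm = load_model("transmitter", device=device)   # decodes latents into implicit functions
    model = load_model("text300M", device=device)   # text-conditional diffusion model
    diffusion = diffusion_from_config(load_config("diffusion"))

    latents = sample_latents(
        batch_size=1,
        model=model,
        diffusion=diffusion,
        guidance_scale=15.0,
        model_kwargs=dict(texts=["a red office chair"]),
        progress=True,
        clip_denoised=True,
        use_fp16=True,
        use_karras=True,
        karras_steps=64,
        sigma_min=1e-3,
        sigma_max=160,
        s_churn=0,
    )

    # Decode the first latent into a textured mesh and save it as an .obj file.
    mesh = decode_latent_mesh(xm, latents[0]).tri_mesh()
    with open("chair.obj", "w") as f:
        mesh.write_obj(f)
    ```
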
  • 5
    Simple StyleGan2 for Pytorch

    Simplest working implementation of Stylegan2

    Simple PyTorch implementation of StyleGAN2 that can be trained entirely from the command line, no coding needed. You will need a machine with a GPU and CUDA installed. You can also specify the location where intermediate results and model checkpoints should be stored. You can increase the network capacity (which defaults to 16) to improve generation results, at the cost of more memory. By default, if the training gets cut off, it will automatically resume from the last checkpointed file. Once you have finished training, you can generate images from your latest checkpoint. If a previous checkpoint contained a better generator (which often happens, as generators can start degrading towards the end of training), you can load from a previous checkpoint with another flag. A technique used in both StyleGAN and BigGAN is truncating the latent values so that they fall close to the mean. The smaller the truncation value, the better the samples will appear, at the cost of sample variety.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 6
    SneakGAN

    StyleGAN2-ADA trained on a dataset of 2000+ sneaker images

    StyleGAN2-ADA trained on a dataset of 2000+ sneaker images. This model was inspired by 98mprice's StyleGAN model and uses the same training data.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 7
    Stable Diffusion in Docker

    Run the Stable Diffusion releases in a Docker container

    Run the Stable Diffusion releases from Hugging Face in a GPU-accelerated Docker container, with txt2img, img2img, depth2img, pix2pix, upscale4x, and inpaint modes. By default, the pipeline uses the full model and weights, which requires a CUDA-capable GPU with 8GB+ of VRAM; it should take a few seconds to create one image. On less powerful GPUs you may need to modify some of the options; see the Examples section for more details. If you lack a suitable GPU you can set the options --device cpu and --onnx instead. Since the model is downloaded from Hugging Face, you will need to create a user access token in your Hugging Face account. Save the user access token in a file called token.txt and make sure it is available when building the container. img2img creates an image from an existing image and a text prompt, and depth2img modifies an existing image using its depth map and a text prompt.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 8
    StudioGAN

    StudioGAN is a Pytorch library providing implementations of networks

    StudioGAN is a Pytorch library providing implementations of representative Generative Adversarial Networks (GANs) for conditional/unconditional image generation. StudioGAN aims to offer an identical playground for modern GANs so that machine learning researchers can readily compare and analyze a new idea. Moreover, StudioGAN provides an unprecedented-scale benchmark for generative models. The benchmark includes results from GANs (BigGAN-Deep, StyleGAN-XL), auto-regressive models (MaskGIT, RQ-Transformer), and Diffusion models (LSGM++, CLD-SGM, ADM-G-U). StudioGAN is a self-contained library that provides 7 GAN architectures, 9 conditioning methods, 4 adversarial losses, 13 regularization modules, 6 augmentation modules, 8 evaluation metrics, and 5 evaluation backbones. Among these configurations, we formulate 30 GANs as representatives. Each modularized option is managed through a configuration system that works through a YAML file.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 9
    Swirl

    Swirl queries any number of data sources with APIs

    Swirl queries any number of data sources with APIs and uses spaCy and NLTK to re-rank the unified results without extracting and indexing anything. It includes zero-code configs for Apache Solr, ChatGPT, Elasticsearch, OpenSearch, PostgreSQL, Google BigQuery, RequestsGet, Google PSE, NLResearch.com, Miro and more. SWIRL adapts and distributes queries to anything with a search API - search engines, databases, NoSQL engines, cloud/SaaS services, etc. - and uses AI (Large Language Models) to re-rank the unified results. It is intended for developers and data scientists who want to solve multi-silo search problems, from enterprise search to new monitoring and alerting solutions that push information to users continuously. Built on the Python/Django/RabbitMQ stack, SWIRL includes connectors to Apache Solr, ChatGPT, Elastic, OpenSearch, PostgreSQL and Google BigQuery, plus a generic HTTP/GET/JSON connector with configurations for premium services.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 10
    Synthetic Data Vault (SDV)

    Synthetic Data Generation for tabular, relational and time series data

    The Synthetic Data Vault (SDV) is a synthetic data generation ecosystem of libraries that allows users to easily learn single-table, multi-table and time-series datasets and then generate new synthetic data that has the same format and statistical properties as the original dataset. Synthetic data can then be used to supplement, augment and in some cases replace real data when training machine learning models. Additionally, it enables the testing of machine learning or other data-dependent software systems without the risk of exposure that comes with data disclosure. Under the hood it uses several probabilistic graphical modeling and deep learning based techniques. To enable a variety of data storage structures, it employs unique hierarchical generative modeling and recursive sampling techniques.
    Downloads: 0 This Week
    Last Update:
    See Project
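
    A single-table sketch of the learn-then-sample workflow described above; the class and method names follow the current SDV synthesizer API, which has changed between SDV versions, and customers.csv is a placeholder input table:

    ```python
    import pandas as pd
    from sdv.metadata import SingleTableMetadata
    from sdv.single_table import GaussianCopulaSynthesizer

    real = pd.read_csv("customers.csv")  # placeholder: any single-table dataset

    # Describe the table (column types, primary key, ...) automatically.
    metadata = SingleTableMetadata()
    metadata.detect_from_dataframe(real)

    # Learn the joint distribution, then sample brand-new rows with the same schema.
    synthesizer = GaussianCopulaSynthesizer(metadata)
    synthesizer.fit(real)
    synthetic = synthesizer.sample(num_rows=1000)
    print(synthetic.head())
    ```
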
  • 11
    TFKit

    Handling multiple NLP tasks in one pipeline

    TFKit is a toolkit mainly for language generation. It leverages transformers for many tasks with different models in an all-in-one framework; all you need is a small change of config. You can use tfkit for model training and evaluation with tfkit-train and tfkit-eval. The key to combining different tasks is to put every task into the same data format. All data is in CSV format - tfkit uses CSV for every task, normally with two columns, where the first column is the model input and the second column is the model output. Plain text with no tokenization is expected - there is no need to tokenize text before training or to recalculate tokenization; tfkit handles it for you. No header row is needed.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 12
    TGAN

    Generative adversarial training for generating synthetic tabular data

    We are happy to announce that our new model for synthetic data, called CTGAN, is open-sourced. The new model is simpler and gives better performance on many datasets. TGAN is a tabular data synthesizer: it can generate fully synthetic data from real data. Currently, TGAN can generate numerical columns and categorical columns. TGAN has been developed and runs on Python 3.5, 3.6 and 3.7. Also, although it is not strictly required, the use of a virtualenv is highly recommended in order to avoid interfering with other software installed on the system where TGAN is run. For development, you can use make install-develop instead in order to install all the required dependencies for testing and linting. In order to be able to sample new synthetic data, TGAN first needs to be fitted to existing data.
    Downloads: 0 This Week
    Last Update:
    See Project
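
    A fit-then-sample sketch following the project's README; census.csv is a placeholder table and the column indices passed as continuous_columns are illustrative assumptions:

    ```python
    import pandas as pd
    from tgan.model import TGANModel

    data = pd.read_csv("census.csv")   # placeholder: real tabular training data
    continuous_columns = [0, 5]        # indices of the numerical columns (assumption)

    # Fit the synthesizer to the real data, then draw fully synthetic rows.
    tgan = TGANModel(continuous_columns)
    tgan.fit(data)
    samples = tgan.sample(1000)
    print(samples.head())
    ```
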
  • 13
    Texar-PyTorch

    Integrating the Best of TF into PyTorch, for Machine Learning

    Texar-PyTorch is a toolkit aiming to support a broad set of machine learning tasks, especially natural language processing and text generation. Texar provides a library of easy-to-use ML modules and functionalities for composing arbitrary models and algorithms. The tool is designed for both researchers and practitioners for fast prototyping and experimentation. Texar-PyTorch was originally developed, and is actively contributed to, by Petuum and CMU in collaboration with other institutes. A mirror of this repository is maintained by Petuum Open Source. Texar-PyTorch integrates many of the best features of TensorFlow into PyTorch, delivering highly usable and customizable modules superior to PyTorch native ones. Texar-PyTorch (this repo) and Texar-TF have mostly the same interfaces and both combine the best design of TF and PyTorch, covering data processing, model architectures, loss functions, training and inference algorithms, evaluation, and more.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 14
    Text Gen

    Almost state-of-the-art text generation library

    Almost state-of-the-art text generation library. Text Gen is a Python library that allows you to build a custom text generation model with ease - something sweet built with TensorFlow, with PyTorch support coming soon. Load your data; your data must be in a text format. Download the example data from the example folder. Tune your model to find the best optimizer and activation method to use.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 15
    TextGen

    textgen, Text Generation models

    Implementation of text generation models. textgen implements a variety of text generation models, including UDA, GPT2, Seq2Seq, BART, T5, SongNet and other models, out of the box. UDA performs non-core word replacement; EDA is a simple data augmentation technique based on synonym replacement and random word insertion, deletion and replacement. This project follows Google's UDA (non-core word replacement) algorithm and the EDA algorithm: based on TF-IDF, it replaces unimportant words in sentences with synonyms and applies random word insertion, deletion and replacement to generate new text and implement text augmentation. The project also implements a back-translation function based on the Baidu translation API: Chinese sentences are first translated into English, and the English is then translated back into new Chinese. It implements training and prediction for Seq2Seq, ConvSeq2Seq and BART models based on PyTorch, which can be used for text generation tasks such as text translation.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 16
    TorchGAN

    Research Framework for easy and efficient training of GANs

    The torchgan package consists of various generative adversarial networks and utilities that have been found useful in training them. This package provides an easy-to-use API which can be used to train popular GANs as well as develop newer variants. The core idea behind this project is to facilitate easy and rapid generative adversarial model research. TorchGAN is a PyTorch-based framework for designing and developing Generative Adversarial Networks. This framework has been designed to provide building blocks for popular GANs and also to allow customization for cutting-edge research. TorchGAN's modular structure lets you combine and customize these building blocks with minimal effort.
    Downloads: 0 This Week
    Last Update:
    See Project
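
    A training sketch in the spirit of the project's tutorials, pairing the bundled DCGAN models and minimax losses with the Trainer on MNIST; the argument names in the network dictionary follow the documentation loosely and should be treated as assumptions:

    ```python
    import torch
    import torchvision
    import torchvision.transforms as transforms
    from torch.utils.data import DataLoader

    from torchgan.models import DCGANGenerator, DCGANDiscriminator
    from torchgan.losses import MinimaxGeneratorLoss, MinimaxDiscriminatorLoss
    from torchgan.trainer import Trainer

    # MNIST resized to 32x32 so it matches the default DCGAN output size.
    dataset = torchvision.datasets.MNIST(
        root="./data", train=True, download=True,
        transform=transforms.Compose([
            transforms.Resize((32, 32)),
            transforms.ToTensor(),
            transforms.Normalize((0.5,), (0.5,)),
        ]),
    )
    dataloader = DataLoader(dataset, batch_size=64, shuffle=True)

    # Models, constructor args, and optimizers in TorchGAN's config-dict style.
    network = {
        "generator": {
            "name": DCGANGenerator,
            "args": {"out_channels": 1, "step_channels": 16},
            "optimizer": {"name": torch.optim.Adam, "args": {"lr": 2e-4, "betas": (0.5, 0.999)}},
        },
        "discriminator": {
            "name": DCGANDiscriminator,
            "args": {"in_channels": 1, "step_channels": 16},
            "optimizer": {"name": torch.optim.Adam, "args": {"lr": 2e-4, "betas": (0.5, 0.999)}},
        },
    }
    losses = [MinimaxGeneratorLoss(), MinimaxDiscriminatorLoss()]

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    trainer = Trainer(network, losses, sample_size=64, epochs=5, device=device)
    trainer(dataloader)
    ```
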
  • 17
    Travesty

    Parody text generator

    A parody text generator, taken from an article published in BYTE Magazine in 1984, in which literary critic Hugh Kenner and computer scientist Joseph O'Rourke introduced their text scrambler "Travesty". See the Wikipedia page for more information. The code has been mostly preserved; I've just added a GUI to make it easier to play around with the options and included a copy of Alice in Wonderland. A Windows binary is available on the releases page. Parody generators are computer programs which generate text that is syntactically correct but usually meaningless, often in the style of a technical paper or a particular writer. They are also called travesty generators and random text generators. Their purpose is often satirical, intending to show that there is little difference between the generated text and real examples.
    Downloads: 0 This Week
    Last Update:
    See Project
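
    Not the project's code, but a toy Python sketch of the underlying idea - build an n-gram table from a source text and walk it at random to produce scrambled, vaguely plausible output:

    ```python
    import random

    def travesty(text, order=2, length=40):
        """Generate scrambled text from a word-level n-gram table of `text`."""
        words = text.split()
        table = {}
        # Record which word follows each window of `order` words in the source.
        for i in range(len(words) - order):
            key = tuple(words[i:i + order])
            table.setdefault(key, []).append(words[i + order])

        state = random.choice(list(table))
        out = list(state)
        for _ in range(length):
            followers = table.get(state)
            if not followers:
                break
            out.append(random.choice(followers))
            state = tuple(out[-order:])
        return " ".join(out)

    source = ("the cat sat on the mat and the dog sat on the rug "
              "and the cat chased the dog around the mat")
    print(travesty(source, order=1, length=25))
    ```
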
  • 18
    Video Diffusion - Pytorch

    Implementation of Video Diffusion Models

    Implementation of Video Diffusion Models, Jonathan Ho's new paper extending DDPMs to video generation, in PyTorch. It uses a special space-time factored U-Net, extending generation from 2D images to 3D videos. 14k steps for difficult moving MNIST (converging much faster and better than NUWA) - work in progress. Any new developments for text-to-video synthesis will be centralized at Imagen-pytorch. For conditioning on text, they derived text embeddings by first passing the tokenized text through BERT-large. You can also directly pass in the descriptions of the video as strings if you plan on using BERT-base for text conditioning. This repository also contains a handy Trainer class for training on a folder of gifs; each gif must be of the correct dimensions image_size and num_frames.
    Downloads: 0 This Week
    Last Update:
    See Project
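
    A minimal train-and-sample sketch following the repository README; the constructor arguments below mirror that README and may differ between releases:

    ```python
    import torch
    from video_diffusion_pytorch import Unet3D, GaussianDiffusion

    # Space-time factored U-Net as the denoising network.
    model = Unet3D(dim=64, dim_mults=(1, 2, 4, 8))

    diffusion = GaussianDiffusion(
        model,
        image_size=32,   # frame height/width
        num_frames=5,    # frames per video
        timesteps=1000,  # diffusion steps
    )

    # One training step on random "videos": (batch, channels, frames, height, width).
    videos = torch.randn(1, 3, 5, 32, 32)
    loss = diffusion(videos)
    loss.backward()

    # After (much more) training, sample new videos of the same shape.
    sampled = diffusion.sample(batch_size=2)
    ```
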
  • 19
    YData Synthetic

    Synthetic data generators for tabular and time-series data

    A package to generate synthetic tabular and time-series data leveraging state-of-the-art generative models. Synthetic data is artificially generated data that is not collected from real-world events. It replicates the statistical components of real data without containing any identifiable information, ensuring individuals' privacy. This repository contains material related to Generative Adversarial Networks for synthetic data generation, in particular regular tabular data and time series. It consists of a set of different GAN architectures developed using TensorFlow 2.0. Several example Jupyter notebooks and Python scripts are included to show how to use the different architectures. YData Synthetic now has a UI to guide you through the steps and inputs needed to generate structured tabular data. The Streamlit app is available from v1.0.0 onwards.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 20
    amrlib

    A Python library that makes AMR parsing, generation and visualization simple

    amrlib is a Python module designed to make processing for Abstract Meaning Representation (AMR) simple by providing the following functions: Sentence-to-Graph (StoG) parsing to create AMR graphs from English sentences; Graph-to-Sentence (GtoS) generation for turning AMR graphs into English sentences; a Qt-based GUI to facilitate the conversion of sentences to graphs and back to sentences; methods to plot AMR graphs both in the GUI and as library functions; training and test code for both the StoG and GtoS models; a SpaCy extension that allows direct conversion of SpaCy Docs and Spans to AMR graphs; and sentence-to-graph alignment routines: FAA_Aligner (Fast_Align Algorithm), based on the ISI aligner code detailed in this paper, and RBW_Aligner (Rule Based Word) for simple, single-token-to-single-node alignment.
    Downloads: 0 This Week
    Last Update:
    See Project
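
    A parse-and-generate round trip sketch using the StoG and GtoS models described above, assuming the pre-trained models have already been downloaded and installed as described in the amrlib documentation:

    ```python
    import amrlib

    # Sentence -> AMR graph (StoG).
    stog = amrlib.load_stog_model()
    graphs = stog.parse_sents(["The boy wants the girl to believe him."])
    print(graphs[0])   # Penman-notation AMR graph

    # AMR graph -> sentence (GtoS).
    gtos = amrlib.load_gtos_model()
    sents, _ = gtos.generate(graphs)
    print(sents[0])
    ```
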
  • 21
    audio-diffusion-pytorch

    Audio generation using diffusion models, in PyTorch

    A fully featured audio diffusion library for PyTorch. Includes models for unconditional audio generation, text-conditional audio generation, diffusion autoencoding, upsampling, and vocoding. The provided models are waveform-based; however, the U-Net (built using a-unet), DiffusionModel, diffusion method, and diffusion samplers are all generic to any dimension and highly customizable to work on other formats. Note: no pre-trained models are provided here; this library is meant for research purposes.
    Downloads: 0 This Week
    Last Update:
    See Project
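
    An unconditional-generation sketch following the repository README; the U-Net channel, factor and attention lists below mirror the README's example configuration and should be treated as assumptions:

    ```python
    import torch
    from audio_diffusion_pytorch import DiffusionModel, UNetV0, VDiffusion, VSampler

    model = DiffusionModel(
        net_t=UNetV0,                  # U-Net backbone
        in_channels=2,                 # stereo waveform channels
        channels=[8, 32, 64, 128, 256, 512, 512, 1024, 1024],
        factors=[1, 4, 4, 4, 2, 2, 2, 2, 2],
        items=[1, 2, 2, 2, 2, 2, 2, 4, 4],
        attentions=[0, 0, 0, 0, 0, 1, 1, 1, 1],
        attention_heads=8,
        attention_features=64,
        diffusion_t=VDiffusion,        # diffusion method
        sampler_t=VSampler,            # diffusion sampler
    )

    # Train on waveforms of shape (batch, channels, length).
    audio = torch.randn(1, 2, 2**18)
    loss = model(audio)
    loss.backward()

    # Turn noise into a new audio sample after training.
    noise = torch.randn(1, 2, 2**18)
    sample = model.sample(noise, num_steps=50)
    ```
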
  • 22
    bert4keras

    Keras implementation of transformers for humans

    Our light reimplementation of BERT for Keras: a cleaner, lighter version of BERT for Keras. This is the Keras version of the transformer model library, re-implemented by the author, committed to combining transformers and Keras with code that is as clean as possible. The original intention of this project is to make modification and customization convenient, so it may be updated frequently. It can load the pre-trained weights of BERT/RoBERTa/ALBERT for fine-tuning, implements the attention masks required by language models and seq2seq, includes pre-training code from scratch (supports TPU and multi-GPU; see the pre-training section), and is compatible with both keras and tf.keras.
    Downloads: 0 This Week
    Last Update:
    See Project
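
    A minimal load-and-predict sketch in the style of the project's README; the three paths are placeholders for a downloaded BERT checkpoint, and the helper names follow the README, so check them against your installed version:

    ```python
    from bert4keras.models import build_transformer_model
    from bert4keras.tokenizers import Tokenizer
    from bert4keras.snippets import to_array

    # Placeholder paths to a downloaded pre-trained BERT checkpoint.
    config_path = "bert_config.json"
    checkpoint_path = "bert_model.ckpt"
    dict_path = "vocab.txt"

    tokenizer = Tokenizer(dict_path, do_lower_case=True)
    model = build_transformer_model(config_path, checkpoint_path)  # loads the weights

    token_ids, segment_ids = tokenizer.encode("language models are fun")
    token_ids, segment_ids = to_array([token_ids], [segment_ids])
    print(model.predict([token_ids, segment_ids]).shape)
    ```
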
  • 23
    cerche

    Experimental search engine for conversational AI such as parl.ai

    This is an experimental search engine for conversational AI such as parl.ai, large language models such as OpenAI GPT3, and humans (maybe).
    Downloads: 0 This Week
    Last Update:
    See Project
  • 24
    flat

    All-in-one image generation AI

    All-in-one image generation AI. Launch StableDiffusionWebUI with just a few clicks; no Python installation or repository cloning is required. Generated images are displayed in a list together with information such as their prompts, and the image folder can be set freely.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 25
    gpt-2-simple

    Python package to easily retrain OpenAI's GPT-2 text-generating model

    A simple Python package that wraps existing model fine-tuning and generation scripts for OpenAI's GPT-2 text generation model (specifically the "small" 124M and "medium" 355M parameter versions). Additionally, this package allows easier generation of text: generating to a file for easy curation and allowing prefixes to force the text to start with a given phrase. For fine-tuning, it is strongly recommended to use a GPU, although you can generate using a CPU (albeit much more slowly). If you are training in the cloud, using a Colaboratory notebook or a Google Compute Engine VM with the TensorFlow Deep Learning image is strongly recommended, as the GPT-2 model is hosted on GCP. You can use gpt-2-simple to retrain a model using a GPU for free in this Colaboratory notebook, which also demos additional features of the package. Note: development on gpt-2-simple has mostly been superseded by aitextgen, which has similar AI text generation capabilities with more efficient training time.
    Downloads: 0 This Week
    Last Update:
    See Project
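
    A fine-tune-and-generate sketch following the package README; shakespeare.txt is a placeholder corpus and the step count is illustrative:

    ```python
    import gpt_2_simple as gpt2

    # Download the "small" 124M model (hosted on GCP) on first run.
    gpt2.download_gpt2(model_name="124M")

    # Fine-tune on a plain-text corpus, then generate with a forced prefix.
    sess = gpt2.start_tf_sess()
    gpt2.finetune(sess, "shakespeare.txt", model_name="124M", steps=500)

    gpt2.generate(sess, prefix="To be, or not to be", length=100)
    ```
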