noiseGPT - decentralized AI

Token and network mechanics

The objective of noiseGPT is to create a decentralized network of GPU nodes running AI inference models. The token economics revolve around a single token, noiseGPT (also referred to as NGPT), which serves both as an incentive for GPU nodes to contribute computing power to the network and as a medium of exchange for users (AI inference requestors).
Additionally, the token will be used for governance and discounts throughout the ecosystem. The token plays a crucial role in delivering on the underlying philosophy of noiseGPT: to stay completely free from hidden built-in biases and censorship. To remove the last dependencies on any single entity, the token will eventually be chain agnostic, meaning it will be present on multiple chains.

Network Architecture and Mechanics

Individuals can join the network by downloading the node software, which lets them contribute their GPU's computational power to the network. Nodes can stake noiseGPT to increase their chances of being selected to fulfill a request. Node selection is a weighted lottery: each node's chance of being chosen is proportional to the amount of noiseGPT it has staked, weighted further by the node's reliability over a preceding time window and its normalized inference time.

\[ P_i = f(x_{wi})\cdot \frac{S_i \cdot \frac{R_i}{T_{it}} }{\sum_{j=1}^{N} S_j \cdot \frac{R_j}{T_{jt}}} \]

\[ \begin{align*} P_{i} & \text{ being the chance of node } i \text{ being assigned the job} \\ S_{i} & \text{ being the amount of noiseGPT staked by node } i \\ R_{i} & \text{ being the reliability of node } i \\ T_{it} & \text{ being the average inference time for node } i \text{ over time period } t \end{align*} \]

An additional function \(f\) ensures that all nodes meeting certain criteria receive a minimum number of requests to fulfill, so that \(R\) and \(T\) remain meaningful.
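The weighted lottery above can be sketched in a few lines. This is a minimal illustration, not the network's actual implementation: the node fields, parameter values, and the omission of the floor function \(f\) are all assumptions made for clarity.

```python
import random

def selection_weight(stake, reliability, avg_inference_time):
    """Weight for one node: S_i * R_i / T_it. Higher stake and
    reliability raise the weight; slower inference lowers it."""
    return stake * reliability / avg_inference_time

def pick_node(nodes):
    """Weighted lottery: each node's chance equals its weight divided
    by the sum of all weights, i.e. P_i from the formula above
    (the minimum-allocation function f is omitted for simplicity)."""
    weights = [selection_weight(n["stake"], n["reliability"], n["avg_time"])
               for n in nodes]
    return random.choices(nodes, weights=weights, k=1)[0]

# Hypothetical nodes with illustrative stake/reliability/latency values
nodes = [
    {"id": "node-a", "stake": 50_000, "reliability": 0.99, "avg_time": 1.2},
    {"id": "node-b", "stake": 20_000, "reliability": 0.95, "avg_time": 0.8},
    {"id": "node-c", "stake": 10_000, "reliability": 0.90, "avg_time": 2.0},
]
winner = pick_node(nodes)
```

Note that `random.choices` normalizes the weights internally, which is exactly the division by \(\sum_j S_j \cdot R_j / T_{jt}\) in the formula.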


At genesis, there will be 1 billion noiseGPT minted. In order to incentivize adoption and decentralization, guaranteed emissions will be sent on a periodic basis until there are 100 operational nodes globally. The amount of emissions delivered to these nodes is again dependent on their reliability. Once the threshold of 100 nodes globally is reached, the AI inference service will be incentivized by organic usage fees and extra fees offered by applications built on top of noiseGPT.
The full token supply will be launched fairly, with no team allocation, private investors, or locked-up tokens. A 5% transaction kickback will ensure runway for development and facilitate the initial incentivization of new nodes and liquidity providers. This kickback, or 'tax', will be phased out slowly. Holders that do not run a node can opt to stake their tokens and assign them to a node operator (and potentially earn yield by doing so, if the node operator chooses to share some of their rewards).


Although noiseGPT launched with a life-like TTS engine, the models will certainly not be limited to that, as can be seen in the roadmap. A decentralized network of AI models and inference GPUs offers a wide range of applications with significant revenue-generating potential. One application already visible is Twitter bots that leverage AI models to automate tasks such as content generation, sentiment analysis, or recommendations. Beyond Twitter bots, there are numerous other possibilities for revenue-generating apps. For instance, AI-powered chatbots can be deployed across various platforms to provide customer support and enhance user experiences. Additionally, personalized recommendation systems can be built for e-commerce platforms, enabling targeted product suggestions and increasing sales conversion rates.

Furthermore, AI models integrated into autonomous trading systems can optimize investment strategies and generate profits in financial markets. AI-based fraud detection systems can help identify and prevent fraudulent activities in sectors such as banking, insurance, and cybersecurity. Another revenue-generating application is the development of AI-driven content curation platforms that provide tailored news, articles, and entertainment to users. AI-powered virtual assistants can offer personalized and intelligent assistance, revolutionizing tasks such as scheduling, reminders, and information retrieval. We also predict a stark rise in chatbots tailored to specific niche industries and areas of expertise: think of a virtual real estate broker, doctor, or psychologist. We will allow projects to use our API to ensure they can offer a solution completely independent of OpenAI, Google, or Microsoft.

In this decentralized network, stakers play a crucial role by allocating their tokens to specific application nodes. To incentivize stakers, these revenue-generating applications can share a portion of their generated revenue with the stakers who have allocated their tokens to the respective application node. This mechanism ensures a symbiotic relationship between the stakers and the applications, encouraging participation and supporting the growth and sustainability of the decentralized AI ecosystem.

Price modeling

The price of noiseGPT will be determined entirely by market forces. It is, however, possible to estimate the cost of AI inference based on various factors and variables. It is important to note that this model does not directly correlate with the actual trading price of our token. Once again: the market price of the token is solely determined by supply and demand within the cryptocurrency market. Despite this distinction, we find value in sharing the formula as a guidance tool. By providing insight into the factors influencing AI inference costs, we offer users and stakeholders a framework to understand potential pricing dynamics and make informed decisions. This transparency fosters trust and lets individuals assess the cost-efficiency of utilizing AI inference services, enabling them to optimize their resource allocation and maximize the benefits derived from our platform. Ultimately, our goal is to provide valuable information to the community and facilitate the development of a thriving ecosystem around our token.

\[ \begin{align*} H_{t} & \text{ as the hardware cost at time } t \\ E_{t} & \text{ as the electricity cost at time } t \\ N_{t} & \text{ as the number of nodes in the network at time } t \\ D_{t} & \text{ as the demand at time } t \\ T_{t} & \text{ as the technology factor at time } t \text{ (this represents the cost-efficiency of computation)} \\ C & \text{ as a constant scaling factor} \end{align*} \]

Then the price \(P_{t}\) at time \(t\) could be modeled as:

\[ P_{t} = C \cdot \frac{D_{t} \cdot (H_{t} + E_{t})}{T_{t} \cdot N_{t}} \]
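The price formula is straightforward to evaluate. The sketch below is illustrative only; the parameter values are hypothetical and the scaling constant \(C\) is left at 1.

```python
def inference_price(demand, hw_cost, elec_cost, tech_factor, num_nodes, C=1.0):
    """P_t = C * D_t * (H_t + E_t) / (T_t * N_t): price rises with
    demand and input costs, and falls as the network grows and
    computation becomes more cost-efficient."""
    return C * demand * (hw_cost + elec_cost) / (tech_factor * num_nodes)

# Hypothetical snapshot: 1000 units of demand, $0.50 hardware and
# $0.10 electricity cost per unit, baseline technology, 100 nodes
price = inference_price(1000, 0.5, 0.1, 1.0, 100)  # → 6.0
```

Doubling the node count (or the technology factor) halves the estimated price, while doubling demand doubles it, matching the intuition the formula encodes.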

We can further express \(D_{t}\), \(H_{t}\), \(E_{t}\), \(T_{t}\), and \(N_{t}\) as growing or decreasing exponentially. Let's say:

\[ \begin{align*} D_{t} & = D_0 \cdot e^{a \cdot t}, \text{ where } D_0 \text{ is the initial demand, } a \text{ is the rate of increase in demand} \\ H_{t} & = H_0 \cdot e^{-c \cdot t}, \text{ where } H_0 \text{ is the initial hardware cost, } c \text{ is the rate of decrease in hardware cost} \\ E_{t} & = E_0 \cdot e^{-d \cdot t}, \text{ where } E_0 \text{ is the initial electricity cost, } d \text{ is the rate of decrease in electricity cost} \\ T_{t} & = T_0 \cdot e^{b \cdot t}, \text{ where } T_0 \text{ is the initial technology factor, } b \text{ is the rate of technology improvement} \\ N_{t} & = N_0 \cdot e^{n \cdot t}, \text{ where } N_0 \text{ is the initial number of nodes, } n \text{ is the rate of increase in nodes} \end{align*} \]

Substituting these into the price formula, we get:

\[ P_{t} = C \cdot \frac{D_0 \cdot e^{a \cdot t} \cdot (H_0 \cdot e^{-c \cdot t} + E_0 \cdot e^{-d \cdot t})}{T_0 \cdot e^{b \cdot t} \cdot N_0 \cdot e^{n \cdot t}} \]


Note that the two cost terms combine into a single exponential only if hardware and electricity costs decline at the same rate. Assuming \(c = d\), this simplifies to:

\[ P_{t} = C \cdot \frac{D_0 \cdot (H_0 + E_0)}{T_0 \cdot N_0} \cdot e^{(a - b - c - n) \cdot t} \]

This model assumes that demand for GPU computation time, hardware cost, electricity cost, technological advancement, and the number of nodes all follow exponential growth or decay: growing demand pushes the price up, while improving technology, a growing network, and falling hardware and electricity costs push it down.

noiseGPT Token Disclaimer
The noiseGPT token is a utility token to be used solely within the noiseGPT decentralized AI inference network. It is not intended to be an investment or a security of any kind. The purchase, ownership, receipt, or possession of noiseGPT tokens carries no rights, express or implied, other than the right to use such tokens as a means to enable usage of, participation in, and interaction with services on the noiseGPT network. noiseGPT tokens are not shares, securities, or equivalent interests. They do not entitle you to ownership or governance rights in the noiseGPT network, any entity, or the software. noiseGPT tokens do not represent equity, shares, royalties, rights to capital, profit, returns, or income in the network or any other entity or software. The noiseGPT token is not a digital currency, commodity, or any other kind of financial instrument and has not been registered under the United States Securities Act of 1933, the securities laws of any state of the United States or the securities laws of any other country, including the securities laws of any jurisdiction in which a potential token holder is a resident. We expressly disclaim any and all responsibility for any direct or consequential loss or damage of any kind whatsoever arising directly or indirectly from the purchase, ownership, receipt, or possession of noiseGPT tokens.