/r/ethereum


Next-generation platform for decentralised applications.

Dive in at ethereum.org


Welcome to r/Ethereum, the front page of web3.


Rules

  • No inappropriate behavior. This includes, but is not limited to: personal attacks, threats of violence, gossip, slurs of any kind, posting people's private information.
  • Keep price discussion, market talk, memes & exchanges to subreddits such as /r/ethfinance or /r/ethtrader
  • While posts on PoS and staking are allowed, also see r/ethstaker
  • For deeper Ethereum dev discussion also see r/ethdev
  • Keep mining discussion to subreddits such as /r/EtherMining.
  • No duplicate threads.
  • No spamming or drive-by posting.
  • No misleading titles.
  • No creating multiple accounts to get around Reddit rules.
  • English language only. Please provide accurate translations where appropriate.
  • Posts and comments must be made from an account at least 10 days old with a minimum of 20 comment karma. Exceptions may be made on a discretionary basis.
  • For a complete list of rules and an Ethereum getting started guide, click here.

Resources


/r/ethereum

2,880,733 Subscribers

1

Did there used to be a Coinbase Wallet shared recovery feature with Coinbase? 🔵🛟

I remember seeing a Coinbase Wallet feature a few years ago for the ability to enable shared recovery of a Coinbase Wallet with a Coinbase account as a part of the recovery process.

Was this removed? I think the feature used Shamir's secret sharing...
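For readers unfamiliar with the scheme mentioned above, here is a minimal toy sketch of Shamir's secret sharing in Python. It is purely illustrative (a 2-of-3 split over a small prime field) and is not Coinbase's actual implementation:

```python
import random

# Toy (2-of-3) Shamir's Secret Sharing over a prime field.
# Illustrative only - not Coinbase's implementation.
PRIME = 2**127 - 1  # a Mersenne prime large enough for this demo

def make_shares(secret, threshold=2, n=3):
    """Split `secret` into n shares; any `threshold` of them recover it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x=0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # pow(den, PRIME - 2, PRIME) is the modular inverse (Fermat)
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```

The key property is that any two of the three shares reconstruct the secret, while a single share reveals nothing - which is what would let a wallet vendor and a user each hold a share without either one custodying the key alone.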

Much appreciated!

1 Comment
2024/04/07
02:10 UTC

8

Paying Eth in the Ethereum (ERC20) network

I'm new to using ETH and I was trying to pay for something that had the warning text "Please note that this Ethereum wallet is on the Ethereum (ERC20) network".

I tried searching whether this network is the "default" ETH network for an ETH wallet, but the results about ERC20 got me even more confused.

6 Comments
2024/04/06
15:56 UTC

18

It seems like a no compromises staking pool is possible, why hasn’t it been made?

I would like to run a validator with less than 32 ether. Looks like my best bet would be something like rocketpool, but they and other pools require you to invest in their tokens. I could be wrong and invalidate my entire post, but it seems like a pool could be made without that compromise. If that is true, I wonder why one hasn’t been made yet since it would allow more validators to further secure the network and remove financial barriers from the only way to generate ether.

Is my thinking valid or am I misunderstanding something crucial?

22 Comments
2024/04/06
06:36 UTC

11

Which Multicoin wallet? (IP address tracking)

Since most wallet providers or service providers can track your IP address, I would like to know which wallet you prefer to use to protect your privacy.

11 Comments
2024/04/06
06:23 UTC

18

Statistic overview: Ethereum validators chart

Dencun upgrade draws peaky mountain ranges on the Ethereum validators chart 🏔️

EIP-7514 by Dapplion and Tim Beiko, implemented during the upgrade, introduced changes to the activation churn limit. The churn limit plays a crucial role, defining the number of validators that can exit (initiate staked ETH withdrawals) or enter (stake ETH) in a single epoch.

This update sets the number of validators that can join the chain at 8 per epoch, while the exit limit remains tied to the number of active validators on Ethereum.

https://preview.redd.it/8n6qkz6mdosc1.jpg?width=1497&format=pjpg&auto=webp&s=c9e56d8d42db85eb6b8250b9d78b0b8e907d40da

This approach safeguards the stability of the Ethereum network, guaranteeing that the validator set growth stays steady and that the chain's finality remains unaffected by many validators joining or leaving the network simultaneously.

The exit churn limit is dynamically calculated based on the network's current number of active validators.
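The arithmetic described above can be sketched directly from the beacon chain spec constants (MIN_PER_EPOCH_CHURN_LIMIT = 4, CHURN_LIMIT_QUOTIENT = 65536, and EIP-7514's MAX_PER_EPOCH_ACTIVATION_CHURN_LIMIT = 8):

```python
# Churn-limit arithmetic per the beacon chain spec and EIP-7514.
MIN_PER_EPOCH_CHURN_LIMIT = 4
CHURN_LIMIT_QUOTIENT = 65536
MAX_PER_EPOCH_ACTIVATION_CHURN_LIMIT = 8  # introduced by EIP-7514

def exit_churn_limit(active_validators: int) -> int:
    # Exit churn still scales with the size of the active validator set.
    return max(MIN_PER_EPOCH_CHURN_LIMIT,
               active_validators // CHURN_LIMIT_QUOTIENT)

def activation_churn_limit(active_validators: int) -> int:
    # EIP-7514 caps activations at 8 per epoch regardless of set size.
    return min(MAX_PER_EPOCH_ACTIVATION_CHURN_LIMIT,
               exit_churn_limit(active_validators))
```

With roughly 1,000,000 active validators, for example, up to 1,000,000 // 65536 = 15 validators can exit per epoch, while activations stay capped at 8 - which is exactly the asymmetry between the entry and exit queues described below.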

Explore more about EIP-7514 and its background here.

The entry queue for staking ETH is much longer than the exit queue, indicating robust community interest in supporting the Ethereum network through staking ETH assets.

If you're considering earning by staking your ETH but are unsure due to the 32 ETH requirement, or have some other concerns, please explore the various staking options with Everstake.

Start staking as little as 0.1 ETH or as much as you want with our non-custodial staking solution:

https://everstake.one/staking/ethereum

https://preview.redd.it/nef2x84vdosc1.jpg?width=885&format=pjpg&auto=webp&s=3581b8cb5922d7641dcf96c4e69b7e76f9b8dc2d

6 Comments
2024/04/05
15:01 UTC

2

Hosted Hardhat/Anvil Fork networks ??

I have been building a project where we provide cloud-hosted fork networks (like Hardhat/Anvil forks) as blockchain mainnet simulators for smart contract testing, CI/CD, or dev/staging environments.

I need to know: is there a need for such a project in the market?
Would I be able to generate any revenue?
Would you use it? What network do you connect to in the dev/staging versions of your dapps?

3 Comments
2024/04/05
13:39 UTC

15

rETH on layer 2 - how to swap

I am holding some rETH on layer 2 (Arbitrum).
When I want to convert to ETH on uniswap or aave, I get a huge percentage loss.

Is there a way to transfer it to layer 1 and then swap? What is the most cost-efficient way so that I don't lose 2% on the trade? I am considering moving out of rETH, since this swap on layer 2 seems really bad compared to, for example, providing ETH on Aave.

22 Comments
2024/04/04
13:11 UTC

2

What's the best place to stake for flexibility and high APY?

I've got a nice stack of ETH lying around and I'm not sure what to do with it. Staking seems like the logical thing to do, but I'm not a fan of locking up my ETH for a measly 4% while the price moves that much in a day. I'm looking for something flexible and *preferably* a higher APY. This is crypto after all; even stocks give better ROI and are less volatile. I've already tried Uniswap and other DEX platforms that offer staking, but I don't like the risk of impermanent loss. Looking for something flexible (no month/year lockups) and at least 10% APR. Any suggestions?

4 Comments
2024/04/03
22:02 UTC

4

DNS on Ethereum

If anyone has worked on implementing DNS on Ethereum (maybe not the whole thing, just a small project that works), can you share how you did it? If anyone knows a GitHub repo or article that discusses the topic, please share. Thank you for your time.

8 Comments
2024/04/04
09:45 UTC

12

Ethereum: concise overview

Here's a handy cheat sheet to kickstart the understanding of Ethereum and its one-and-only token, ETH!

You'll find core information, key stats, a high-level roadmap, a short explanation of the consensus mechanism, and more. Also, there is a quick explanation of the Ethereum staking and its benefits.

Share this with friends just starting with ETH for a clear overview!

Stay tuned with Everstake for more interesting updates and overviews!

https://preview.redd.it/w3kzf65w4fsc1.png?width=1170&format=png&auto=webp&s=03a07910005daa16dcf16189cf02c0503251a8d0

0 Comments
2024/04/04
07:58 UTC

6

Cheapest Way to Self Custody ETH?

I have recently invested in a Trezor Safe 3 and want to self-custody my ETH from Binance. When withdrawing, can I send the ETH via an L2 to my Trezor? Or do I have to use L1? I have some confusion:

  1. I want to use the least fees possible, I see around $10 in fees on ERC20 and $0.1 in fees if I use an L2 like optimism, so here the L2 makes sense for me

  2. But can I transfer this to the ETH wallet on my Safe 3? Would I need another wallet on the optimism network?

  3. Once on the wallet can I convert back to regular ERC20 ETH or will the same $10 in fees come into play when doing that, in which case I might as well do the one transaction on ERC20 straight to the wallet?

  4. Is there any risk in holding ETH on an L2 like optimism if I'm going long term? Like hypothetically if something happens and optimism goes offline, would I lose all the ETH I have there?

UPDATE Thank you all for the advice, I decided to just eat the fees and transfer it over ERC20 itself so I could HODL with peace of mind. If anyone is doing the same, remember to always do a test transaction with the minimum amount first, allow it to process and confirm.

15 Comments
2024/04/04
07:07 UTC

0

Sent $9,000 worth of ETH to ETH Classic by accident on Robinhood - I'm so upset. It shows on Etherscan. I need help, please.

I need help. If you are a scammer, I won't respond. If anyone knows how to reach Robinhood's support, please help me.

84 Comments
2024/04/04
05:41 UTC

11

Measuring engineering collaboration in a decentralised, permissionless and hostile environment

Hello, Ethereum subreddit community! As someone who's been in engineering leadership roles, I've always had this question bobbing around in my mind: how can we really measure an engineer's contribution to a project in a clear and fair way? My longstanding participation in the crypto space has only made this question loom larger, especially with the whole idea of trustless, permissionless environments. So, I've put together a little essay on this. I hope you find it interesting and that it sparks some ideas for you too.

Intro

I've been particularly interested in a scenario where one could measure an individual contributor's engagement and contributions based solely on GitHub and the artifacts that this individual generates through their workflow. This includes actions such as code commits, pull requests, and code reviews particularly in a permissionless and decentralized environment.

In this setting, traditional management tools like Jira and commonly used DORA metrics including Cycle Time and Lead Time are often unavailable or impractical to implement. Analyzing the engineering metrics generated directly from the codebase allows us to measure an engineer's contributions and engagement in a clear, straightforward manner. Moreover, this approach is well-suited for open-source projects, where interactions are primarily centered around the codebase and there's limited scope for other forms of engagement.

No doubt many of you, like myself, have participated in organizations that employ tools like Jira to manage workflows and measure individual contributions, often using metrics such as story points. I can recall many instances throughout my career where I felt frustrated by such systems. In large organizations, it can be particularly challenging to gain visibility for the hard work of individual contributors. It's not uncommon for middle management to be introduced to identify top performers, a process that can often devolve into a game of social capital. This can lead to inefficient workflows and, in some cases, a politicized environment that detracts from the organization's main goals and the individual's experience.

Imagine this now: What if we could come up with a super precise way to measure who's doing what in these projects? It could be a game-changer. We might finally crack the code on how to make money from open source work. Communities could start selling products or services, and here's the best part - the money made could be shared out among everyone involved, based on how much they've contributed. This could totally revolutionize how we work together and create value in the open source community.

Clearly, a venture is more than just the sum of its engineering contributions. It's a complex ecosystem that includes multiple roles, such as designers, project managers, and marketers, each contributing to the overall success in unique ways. It's crucial to understand this holistic view of a venture. That being said, there is a subset of problems that companies are currently solving which could realistically be managed entirely by self-coordinated engineering teams. A prime example of this is open source software. While I believe that one day we will be able to deterministically measure all types of contributions, whether from a designer, project manager or marketer, the reality is that we are not quite there yet. However, measuring engineering contributions is a not-so-low hanging fruit that we can start with. This is not to diminish the importance of other roles, but rather to acknowledge the practicality and immediate feasibility of quantifying engineering impact.

AI x Crypto

While much of the tech world is buzzing about the potential of AI to boost engineering productivity and enhance developer tools, there is an equally important aspect that is often overlooked: the power of AI to measure that productivity and those contributions. This is a side of the coin that, unfortunately, tends to be dismissed or even frowned upon. Many people perceive such applications of AI as oppressive or a means to weaponize data at the expense of worker exploitation. However, IMHO, this perspective fails to grasp the transformative potential of these technologies. Rather than being a tool for oppression, AI can serve as an enabler for true collaboration within a permissionless paradigm. It can unlock an entirely new way of working, lowering barriers to access capital and fostering collaboration among individuals who share the same ideas and are working towards a shared goal. By quantifying contributions, AI can provide an objective basis for recognizing and rewarding effort, thereby promoting a more equitable and inclusive work environment.

While the advent of Large Language Models (LLMs) has brought about significant advancements in AI, it's essential to recognize their limitations. LLMs are not well-suited for deterministic scenarios or strict logic, as they cannot reliably form a chain of logical conclusions. Their capacity to construct and follow complex logical sequences is limited, which can pose challenges in tasks requiring rigorous logical reasoning. However, where LLMs truly shine is in evaluating subjective matters such as code quality. Especially with the recent explosion of context tokens available on consumer-grade tools.

We'll be delving deeper into the topic of Large Language Models (LLMs) later on, but it's worth noting here that they can offer significant enhancements to existing research methodologies aimed at capturing the impact of contributions. LLMs bring in a non-deterministic and subjective aspect that is usually only created through human interaction, adding a new layer of richness to the analysis that goes beyond traditional programmatic approaches.

Another significant advantage of Large Language Models (LLMs) is their potential to help with abuse prevention, an area in which previous algorithms fell short. LLMs can be instrumental in detecting contributors who are merely "farming" points and not providing any meaningful contributions. This increased level of scrutiny can ensure that points are awarded based on genuine contributions, enhancing the fairness and integrity of the system.

Micro-DAOs, Collaborative SaaS.

For some of you, this concept of collaboration within a permissionless environment might ring a bell, especially when it comes to DAOs. I must admit, I have a love-hate relationship with DAOs. On one hand, the concept is absolutely phenomenal. Yet, on the other hand, the execution often leaves much to be desired. DAOs are a complex topic, and while I won't delve too deeply into it here, one thing is clear: the barrier of entry for newcomers to the ecosystem is high, particularly when it comes to creating DAOs. The considerations are numerous: from determining voting mechanisms and proposal acceptance criteria to designing the governance structure and the number of initial participants. Starting a new venture as a DAO often requires a significant upfront investment of time and resources simply to establish the functional framework. This complexity and resource intensity can deter many from taking the plunge into the DAO world.

Over recent years, there has been a remarkable advancement in the tools available for DAOs, making the process of creating one considerably less daunting than it was just four years ago. However, if you wish to establish something truly unique that mirrors your organization or community's values, you still need to develop your own smart contract, and it better be audited…

Nowadays, tools like Coordinape and Karma allow for a more streamlined management of DAO operations. However, the complexity of these operations still poses a significant challenge, especially for newcomers stepping into the web3 world. To some, it may seem an insurmountable task. The most effective approach, in this case, is to first participate in a DAO to understand its complexities and intricacies. This will allow you to build a community around your idea and understand the parameters involved. Only after this can you consider creating a DAO that could generate value for your community or for yourself.

The duality of DAOs has always intrigued me. On one hand, their permissionless nature and purpose are aimed at lowering barriers to collaborative work, fostering a space for people to unite towards a common goal. However, in reality, the complexities inherent in DAOs often result in the opposite effect. The high barriers to entry, primarily due to their intricate nature, can deter many potential collaborators. What we need is a starting point that is accessible and requires little resources to explore. It must be capable of organic growth, both in terms of size and complexity, but only when necessary. We need a way for individuals to collaborate in a manner akin to their current work practices. If such a system could generate value for its participants, it should also possess the flexibility to adapt and transform according to the evolving phases of the organization, while providing financial tools to sell products and services from the beginning.

The status quo

I hope by now I've provided a compelling case for why this is an important problem to tackle, and how various types of organizations could stand to reap significant benefits from such a metric. Whether it's open-source projects, traditional corporations, or DAOs, there is a broad spectrum of entities that could find immense value in having a quantifiable measure of individual engineering impact. But now, we're getting to the most interesting part - how exactly can we achieve this?

Now, we are witnessing the emergence of initiatives like Open Source Observer, which are designed to provide meaningful data around the impact of projects. Pioneers such as Optimism RPGF and Gitcoin have paved the way to raise awareness about such impact metrics in the realm of public goods projects, pushing this matter into the mainstream conversation. While impact metrics for projects within an ecosystem, or within web3, are receiving increasing attention, we still need to highlight the immense value that impact metrics for individuals can bring to web3 and to society in general. The potential use cases are vast and we are just at the beginning of understanding what we can achieve with it.

This peculiar obsession of mine has led me down several different paths of research. Yet, one common factor amongst all the methods I've explored is their lack of real-world experimentation and productization. While the majority of the material and solutions are tucked away in the academic world, a few open-source projects are "almost" usable and provide a good framework to follow. However, at their core, most of these methods use the same raw metrics as a starting point:

  • Pull requests
  • Pull request reviews
  • Comments on issues
  • Comments on pull requests
  • Issues

These metrics provide a fairly accurate picture of an engineer's engagement and impact on a project. On the other hand, metrics like commits or deployments might not be as straightforward or meaningful in this context. For instance, the number of commits a developer makes doesn't necessarily correlate with the quality or importance of their contributions. Similarly, deployments are often dependent on other factors beyond an individual engineer's control. Therefore, our focus should be on the metrics that most accurately and fairly represent an engineer's contributions to a project.

Here's where things start to get tricky. How do you actually measure that data? Do you simply sum up the number of pull requests of each engineer and compare them? But what is more valuable? An issue or a pull request review?

A real experiment

I stumbled upon a project called SourceCred in 2020; they created a great methodology for measuring individual impact. A couple of academic papers used SourceCred to analyze contributions, along with similar approaches to the same problem, and although never popularized, it made big waves among collaboration enthusiasts while it was alive.

Back in its early days, SourceCred received funding from Protocol Labs and was led by two Google engineers who had previously worked on PageRank and TensorFlow respectively. For those unfamiliar with PageRank, it's Google's proprietary algorithm used to measure the relevance of a website in relation to others based on a specific search query. The key element here is the concept of "relevance in comparison to others". This is the core principle that SourceCred adopted and applied to contributions. It constructed a graph where both contributions and contributors are represented as nodes. These nodes are interconnected with edges, the weights of which are determined based on the relevance of both the contributors and the contributions themselves.

There are several intriguing aspects of how this algorithm calculates the weights of these edges. Perhaps one of the most fascinating approaches is the inherent retroactivity of contributions. If a specific artifact references an old issue at a later date, the relevance of that issue increases based on the relevance of the artifact that referenced it. This retroactive approach ensures that contributions are valued not just in the moment they're made, but over the course of the project's lifespan, acknowledging the long-term impact of contributions.
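To make the "relevance in comparison to others" idea concrete, here is a toy power-iteration PageRank over a tiny, hypothetical contribution graph. The node names are invented for illustration; SourceCred's actual graph construction and edge weighting are considerably more elaborate:

```python
# Toy PageRank by power iteration over a contribution graph.
# Nodes are contributors and artifacts; directed edges are references.
def pagerank(edges, damping=0.85, iters=50):
    nodes = sorted({n for e in edges for n in e})
    out = {n: [d for s, d in edges if s == n] for n in nodes}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for n in nodes:
            targets = out[n] or nodes  # dangling nodes spread rank evenly
            for t in targets:
                new[t] += damping * rank[n] / len(targets)
        rank = new
    return rank

# A later pull request referencing an old issue raises that issue's score,
# mirroring the retroactivity described above.
edges = [("pr_2", "issue_1"), ("pr_2", "alice"), ("issue_1", "bob")]
scores = pagerank(edges)
```

Because `pr_2` links to `issue_1`, the old issue ends up ranked above the PR itself - value flows backwards to the contributions (and contributors) that later work depends on.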

https://preview.redd.it/1x5vp21vzcsc1.png?width=462&format=png&auto=webp&s=697e3b4ee5695c5950e1202dfd5eb2e6bc0bd363

However, it's important to note that SourceCred is not without its flaws. Notably, the system is susceptible to Sybil attacks, where an entity illegitimately inflates its influence by creating multiple identities. Additionally, SourceCred currently lacks the ability to account for qualitative metrics. While this might seem fair at a superficial level, it's clear that this approach falls short in capturing the nuance of real-world scenarios.

This is where the power of AI comes into play. With the recent explosion of capabilities in consumer-accessible Large Language Models (LLMs) and the increase in available context tokens, such as the recent release of Grok, we can successfully add qualitative heuristics to these contributions, taking into account the entire context of the project, and prevent or at least decrease the abuse potential. However, it's important to note that there is a limit to how much context an LLM can handle effectively. For instance, it is currently beyond the scope of any LLM to handle an enormous codebase like that of Google in its entirety... at least for now.

Unfortunately, SourceCred's journey was not all smooth sailing. The project stopped receiving funding from Protocol Labs and consequently, it faded away. An in-depth ethnography by Ellie Rennie provides an excellent account of what transpired during this period. In an ambitious leap, the SourceCred community sought to measure contributions from various platforms, including Discord, Discourse, and others. However, this approach proved too expansive and ultimately opened the door to abuse and exploitation. The algorithm was simply not equipped to handle contribution patterns from these diverse sources. This experience strengthens my belief that we should first strive to solve a small subset of the problem very well through experimentation before thinking about measuring other types of collaborations that are much less deterministic than code.

Despite the cessation of funding, a small community still surrounds SourceCred. However, active development is no longer ongoing. As a result, the SourceCred algorithm has become increasingly unfriendly for newcomers. The setup is complex, and there's a substantial journey involved in addressing compatibility issues and filling gaps in the documentation. Unfortunately, it's not an easy task to set it up and measure any contribution type, especially today as the project drifts further into deprecation, given it's written in JavaScript rather than the more modern TypeScript.

Conclusion

The possibilities here are endless, and there are no right or wrong answers. This is something we can only achieve through experimentation, constant iteration, and insightful feedback from people in a permissionless environment.

I've created an open-source tool to help people achieve just that: https://armitage.xyz

We're using SourceCred as a base and plan to expand it with qualitative AI heuristics. Currently, we are iterating on and exploring what those quality metrics might be. We also hope to receive funding to modernize SourceCred to TypeScript.

Although the tool is still just an MVP, it is already useful to some teams, who are compensating their contributors using Armitage scores. We welcome contributions, and we are ourselves a meta-experiment, measuring contributions within our project for future rewards.

17 Comments
2024/04/04
00:46 UTC

3

Multi Sig question

Hi everyone,

There are two people, each with their own wallet. One holds shared funds that are being used to actively trade and swap on the ETH chain.

We are seeking a solution where these funds would be transferred to a multisig wallet, where swaps and trades are allowed (either by one party or both) but withdrawal of any funds requires sign-off from both parties to go through.

What is our best shot at this?

Thanks,
galaxyZ1

6 Comments
2024/04/03
21:47 UTC

43

Why do token launches still happen overwhelmingly on L1?

I just checked the new listings on CoinMarketCap and on the first page, I've found 28 Tokens launched on Ethereum, 2 on Polygon and 1 on Arbitrum. Why is it not worth it for devs to deploy their tokens on L2s, is it that much harder to do / do they just not trust L2s yet?

52 Comments
2024/04/03
20:29 UTC

1

Transfer from Bibox to Coinmerce taking over 3 weeks?

As seen in this screenshot, I transferred 0.18 ETH to Coinmerce, but it is still stuck on the status "withdrawing". Does anyone have an idea how to proceed? Thanks.

5 Comments
2024/04/03
18:20 UTC

1

Some more staking questions!

Hello, so I currently am deciding on how/where to stake my eth and future eth purchases and narrowed it down to the following 3 options:

  1. Stake on lido via ledger nano x

- this option seems pretty easy since I can use ledger live and do it all through there. I understand the cons of using closed source software and would mitigate some risk by spreading out my crypto on different hardware wallets that are not closed source. My question here would be - what are the fees typically associated with staking my eth/converting it to stETH? and back? I know that people say that it depends on gas fees of the network but is there a way for me to estimate say how much it would cost to stake 3-5 eth right now? And in the future I assume I would incur the same gas fees for converting stETH back to eth?

  2. Stake on Rocket Pool

- From what I understand, I can do this using both the Ledger and Trezor, but I may have to go through MetaMask? Or I can purchase the liquid token on an exchange and transfer it to my wallet. I'm trying to do this without incurring too many gas fees overall - which of the above options would be best? I assume the fees to convert to rETH and back are the same as going from ETH to stETH and back?

  3. Stake on Coinbase - this one is the least likely option for me, since they take 25% of your earnings? Do they at least not charge any fees for staking/unstaking? Any mandatory lockup period? Just asking to learn more about the process - but I would heavily prefer option 1 or 2.
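The fee questions above are mostly arithmetic once you know the gas price. A minimal sketch, assuming a staking deposit on the order of ~100k gas (an illustrative figure, not a quote for any particular protocol - check a live gas tracker and the contract's actual gas usage before transacting):

```python
# Back-of-the-envelope fee estimate for an L1 staking transaction.
# gas_used here is an assumption for illustration; real values vary
# by contract and should be taken from a gas tracker or estimator.
def tx_fee_usd(gas_used: int, gas_price_gwei: float, eth_price_usd: float) -> float:
    fee_eth = gas_used * gas_price_gwei * 1e-9  # 1 gwei = 1e-9 ETH
    return fee_eth * eth_price_usd

# e.g. 100,000 gas at 30 gwei with ETH at $3,300:
# 100_000 * 30 * 1e-9 = 0.003 ETH, about $9.90
```

Note that the fee depends on gas units and gas price, not on the amount staked - staking 3 ETH or 5 ETH in one transaction costs roughly the same in gas.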

thanks!

2 Comments
2024/04/03
17:37 UTC

29

6 consecutive days of 21K blobs per day, totaling 106K blobs - 54% of them were from Inscriptions

39 Comments
2024/04/03
14:09 UTC

6

Staking Question

I currently have a modest amount of Eth staked on an exchange. I have recently heard about mini pool staking via rocket pool where you don't have to have 32 eth. My question is what are the pros and cons of doing this vs just staking on an exchange.

10 Comments
2024/04/03
12:21 UTC

0

Ethereum running out of addresses?

Maybe a dumb question but I've done some quick calculations regarding the potential address use in the future and it looks to me like Ethereum might run out of address space in about 116 years given the current growth rate.

Possible addresses: 2^(160)

Current unique addresses: 263,000,000 (https://etherscan.io/chart/address)

Avg. growth rate: ~120% / year (https://ycharts.com/indicators/ethereum_cumulative_unique_addresses)

Projection at current growth rate: 263,000,000 * 2.2 ^ 116 = ~ 2^(160)
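The projection above can be checked by solving for the number of years directly. This reproduces the post's arithmetic and inherits its big assumption: that 120% annual growth continues indefinitely, which is exactly the premise commenters are likely to challenge:

```python
import math

# Years until the cumulative address count reaches 2^160,
# assuming constant 2.2x annual growth from 263M addresses.
possible = 2 ** 160
current = 263_000_000
growth = 2.2  # ~120% annual growth

years = math.log(possible / current) / math.log(growth)
# years comes out to roughly 116, matching the post's figure
```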

What am I missing?

60 Comments
2024/04/03
08:54 UTC

6

🦏 Rhino Review - Ethereum Staking Journal #24 is out!

Read here: https://rhinoreview.substack.com/p/rhino-review-ethereum-staking-journal-93b

Rhino Review - Ethereum Staking Journal supported by EthStaker.
The EthStaker community deserves immense recognition for their unwavering support. Kudos to their dedication and collaborative spirit!

1 Comment
2024/04/03
07:27 UTC

1

Simplifying crypto payments - I have been building this project for all types of businesses that want to accept crypto as payment

0 Comments
2024/04/03
07:01 UTC

15

Suggest a book like “The Infinite Machine”

I recently bought Camila Russo's The Infinite Machine, which is a spectacular story about the early days of Ethereum. I'm nearing its end and looking for similar books to keep reading about Ethereum and blockchain technologies.

I don't want to read some scammy guy's "how you can make it on blockchain" kind of book, or investment strategies, so please skip those if that's what you were going to suggest.

I'm only looking for quality storybooks about blockchains that also touch on the lore, tech, and philosophy. Do you have any suggestions?

18 Comments
2024/04/02
23:14 UTC

1

Does this contract not allow sells/swaps?

Was trying to buy a new AI coin that dropped today and accidentally bought the wrong one.

For some reason, it appears this coin I bought is only allowing buys and not sells. I have failed 3 times so far trying to swap it on uniswap. I've even upped the slippage to 30% for shits n giggles and it still fails.

Anyone smarter than me able to take a look and see if anything weird stands out?

https://etherscan.io/txs?a=0x9dc6e3d28069234e03d57ecacd6e23c483587807

9 Comments
2024/04/02
22:13 UTC

6

Anyone have experience with enterprise versions of trading APIs (0x, 1inch, etc.)?

I'm working on integrating swapping of Eth based assets into a large web2 app...

We're looking into integrating one of the large aggregator providers APIs to get up and running quickly. So far we've just found 0x and 1inch.

Does anyone have experience with the enterprise versions of their plans? How have you found performance? Support? What pricing do they use for the enterprise plans? Any other providers we should be looking into?

Thanks!

2 Comments
2024/04/02
18:01 UTC
