AI and Efficiency

We’re releasing an analysis showing that since 2012 the amount of compute needed to train a neural net to the same performance on ImageNet classification has been decreasing by a factor of 2 every 16 months. Compared to 2012, it now takes 44 times less compute to train a neural network to the level of AlexNet (by contrast, Moore’s Law would yield an 11x cost improvement over this period). Our results suggest that for AI tasks with high levels of recent investment, algorithmic progress has yielded more gains than classical hardware efficiency.
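
As a rough sanity check on these figures, the arithmetic below converts the 44x improvement over roughly seven years into an implied doubling time and compares it with a naive Moore's Law extrapolation over the same window. This is an illustrative sketch based only on the numbers quoted above, not code from the paper.

    import math

    # 44x less compute needed to reach AlexNet-level performance, 2012 -> 2019 (~84 months).
    improvement = 44
    months = 7 * 12

    # Implied doubling time: solve 2**(months / T) = improvement for T.
    doubling_time = months * math.log(2) / math.log(improvement)
    print(f"Implied efficiency doubling time: {doubling_time:.1f} months")  # ~15.4 months

    # Naive Moore's Law over the same window: one doubling every 24 months.
    moores_law_gain = 2 ** (months / 24)
    print(f"Moore's Law gain over the same period: {moores_law_gain:.1f}x")  # ~11.3x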

Algorithmic improvement is a key factor driving the advance of AI. It’s important to search for measures that shed light on overall algorithmic progress, even though doing so is harder than measuring such trends in compute.

44x less compute required to get to AlexNet performance 7 years later

Total amount of compute in teraflops/s-days used to train to AlexNet level performance. Lowest compute points at any given time shown in blue, all points measured shown in gray.

Measuring efficiency

Algorithmic efficiency can be defined as reducing the compute needed to train a specific capability. Efficiency is the primary way we measure algorithmic progress on classic computer science problems like sorting. Efficiency gains on traditional problems like sorting are more straightforward to measure than in ML because they have a clearer measure of task difficulty.[1] However, we can apply the efficiency lens to machine learning by holding performance constant. Efficiency trends can be compared across domains like DNA sequencing (10-month doubling), solar energy (6-year doubling), and transistor density (2-year doubling).

For our analysis, we primarily leveraged open-source re-implementations to measure progress on AlexNet level performance over a long horizon. We saw a similar rate of training efficiency improvement for ResNet-50 level performance on ImageNet (17-month doubling time). We saw faster rates of improvement over shorter timescales in Translation, Go, and Dota 2 (rough doubling-time arithmetic for these examples follows the list):

  1. Within translation, the Transformer surpassed seq2seq performance on English to French translation on WMT’14 with 61x less training compute 3 years later.
  2. We estimate AlphaZero took 8x less compute to get to AlphaGoZero level performance 1 year later.
  3. OpenAI Five Rerun required 5x less training compute to surpass OpenAI Five (which beat the world champions, OG) 3 months later.
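
Applying the same doubling-time arithmetic to these three examples gives noticeably shorter doubling times than the 16-month ImageNet trend. This is an illustrative sketch using only the factors and time spans quoted above.

    import math

    # (efficiency factor, elapsed months) as quoted in the text above.
    examples = {
        "Transformer vs. seq2seq (WMT'14 En-Fr)": (61, 36),
        "AlphaZero vs. AlphaGoZero": (8, 12),
        "OpenAI Five Rerun vs. OpenAI Five": (5, 3),
    }

    for name, (factor, months) in examples.items():
        doubling = months * math.log(2) / math.log(factor)
        print(f"{name}: ~{doubling:.1f}-month doubling time")
    # Roughly 6.1, 4.0, and 1.3 months respectively.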

It can be helpful to think of compute in 2012 as not being equal to compute in 2019, much as dollars need to be inflation-adjusted over time: a fixed amount of compute could accomplish more in 2019 than in 2012. One way to think about this is that some types of AI research progress in two stages, similar to the “tick-tock” model of development seen in semiconductors; new capabilities (the “tick”) typically require a significant amount of compute expenditure to obtain, then refined versions of those capabilities (the “tock”) become much more efficient to deploy due to process improvements.

Increases in algorithmic efficiency allow researchers to do more experiments of interest in a given amount of time and money. In addition to being a measure of overall progress, algorithmic efficiency gains speed up future AI research in a way that’s somewhat analogous to having more compute.

Other measures of AI progress

In addition to efficiency, many other measures shed light on overall algorithmic progress in AI. Training cost in dollars is related, but less narrowly focused on algorithmic progress because it’s also affected by improvements in the underlying hardware, hardware utilization, and cloud infrastructure. Sample efficiency is key when we’re in a low data regime, which is the case for many tasks of interest. The ability to train models faster also speeds up research and can be thought of as a measure of the parallelizability of learning capabilities of interest. We also find increases in inference efficiency in terms of GPU time, parameters, and FLOPs meaningful, but mostly as a result of their economic implications[2] rather than their effect on future research progress. ShuffleNet achieved AlexNet-level performance with an 18x inference efficiency increase in 5 years (15-month doubling time), which suggests that training efficiency and inference efficiency might improve at similar rates. The creation of datasets/environments/benchmarks is a powerful method of making specific AI capabilities of interest more measurable.

Primary limitations

  1. We have only a small number of algorithmic efficiency data points on a few tasks. The degree to which the efficiency trends we’ve observed generalize to other AI tasks is unclear. Systematic measurement could make it clear whether an algorithmic equivalent to Moore’s Law[3] exists in the domain of AI, and if it does, clarify its nature. We consider this a highly interesting open question. We suspect we’re more likely to observe similar rates of efficiency progress on similar tasks. By similar tasks, we mean tasks within these sub-domains of AI on which the field agrees we’ve seen substantial progress, and that have comparable levels of investment (compute and/or researcher time).
  2. Even though we believe AlexNet represented a lot of progress, this analysis doesn’t attempt to quantify that progress. More generally, the first time a capability is created, algorithmic breakthroughs may have reduced the resources required from totally infeasible[4] to merely high. We think new capabilities generally represent a larger share of overall conceptual progress than observed efficiency increases of the type shown here.
  3. This analysis focuses on the final training run cost for an optimized model rather than total development costs. Some algorithmic improvements make it easier to train a model by making the space of hyperparameters that will train stably and get good final performance much larger. On the other hand, architecture searches increase the gap between the final training run cost and total training costs.
  4. We don’t speculate[5] on the degree to which we expect efficiency trends to extrapolate in time; we merely present our results and discuss the implications if the trends persist.

Measurement and AI policy

We believe that policymaking related to AI will be improved by a greater focus on the measurement and assessment of AI systems, both in terms of technical attributes and societal impact. We think such measurement initiatives can shed light on important questions in policy; our AI and Compute analysis suggests policymakers should increase funding for compute resources for academia, so that academic research can replicate, reproduce, and extend industry research. This efficiency analysis suggests that policymakers could develop accurate intuitions about the cost of deploying AI capabilities—and how these costs are going to change over time—by more closely assessing the rate of improvements in efficiency for AI systems.

Tracking efficiency going forward

If large-scale compute continues to be important to achieving state-of-the-art (SOTA) overall performance in domains like language and games, then it’s important to put effort into measuring notable progress achieved with smaller amounts of compute (contributions often made by academic institutions). Models that achieve training efficiency SOTAs on meaningful capabilities are promising candidates for scaling up and potentially achieving overall top performance. Additionally, measuring algorithmic efficiency improvements is straightforward[6], since they are just a particularly meaningful slice of the learning curves that all experiments generate.

We also think that measuring long run trends in efficiency SOTAs will help paint a quantitative picture of overall algorithmic progress. We observe that hardware and algorithmic efficiency gains are multiplicative and can be on a similar scale over meaningful horizons, which suggests that a good model of AI progress should integrate measures from both.
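
As a hedged illustration of that multiplicative point, using the AlexNet-era numbers quoted earlier (roughly 11x from hardware over 2012-2019 and 44x from algorithmic efficiency), the combined effective gain is simply their product. This is a back-of-the-envelope combination for illustration, not a figure reported in the paper.

    # Hardware and algorithmic gains compound multiplicatively.
    hardware_gain = 11       # ~Moore's Law over the 2012-2019 window
    algorithmic_gain = 44    # measured efficiency gain to reach AlexNet-level performance

    combined = hardware_gain * algorithmic_gain
    print(f"Combined effective reduction in training cost: ~{combined}x")  # ~484x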

Our results suggest that for AI tasks with high levels of investment (researcher time and/or compute), algorithmic efficiency might outpace gains from hardware efficiency (Moore’s Law). Moore’s Law was coined in 1965, when integrated circuits had a mere 64 transistors (6 doublings), and naively extrapolating it forward predicted personal computers and smartphones (an iPhone 11 has 8.5 billion transistors). If we observe decades of exponential improvement in the algorithmic efficiency of AI, what might it lead to? We’re not sure. That these results make us ask this question is a modest update for us towards a future with powerful AI services and technology.

For all these reasons, we’re going to start tracking efficiency SOTAs publicly. We’ll start with vision and translation efficiency benchmarks (ImageNet[7] and WMT14), and we’ll consider adding more benchmarks over time. We believe there are efficiency SOTAs on these benchmarks we’re unaware of and encourage the research community to submit them here (we’ll give credit to original authors and collaborators).

Industry leaders, policymakers, economists, and potential researchers are all trying to better understand AI progress and decide how much attention they should invest and where to direct it. Measurement efforts can help ground such decisions. If you’re interested in this type of work, consider applying to work at OpenAI’s Foresight or Policy team!

Source: https://openai.com/blog/ai-and-efficiency/

Bitcoin Difficulty Ribbon Could Indicate Imminent Price Increase

Bitcoin’s difficulty ribbon compression has just broken out of the green buy zone for the first time since March. The metric was reported by analytics provider Glassnode, which added that historically, such periods have been characterized by positive momentum and significant price increases.

Historical Bitcoin Buy Signal

The Bitcoin difficulty ribbon was created by chartist Willy Woo. It consists of simple moving averages of network difficulty, making the rate of change of difficulty easy to see. Periods of high ribbon compression, such as the current one, have historically been good buying opportunities.
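
For readers unfamiliar with the indicator, here is a minimal sketch of how such a ribbon could be computed from daily difficulty data. The moving-average windows and the compression measure below are assumptions chosen for illustration; Willy Woo’s published ribbon may use different parameters.

    import pandas as pd

    def difficulty_ribbon(difficulty: pd.Series,
                          windows=(9, 14, 25, 40, 60, 90, 128, 200)) -> pd.DataFrame:
        """Simple moving averages of network difficulty over several windows (in days)."""
        ribbon = pd.DataFrame({f"sma_{w}": difficulty.rolling(w).mean() for w in windows})
        # One possible compression measure: the ribbon's spread relative to its level.
        # Low values mean the averages are tightly squeezed together.
        ribbon["compression"] = (ribbon.max(axis=1) - ribbon.min(axis=1)) / ribbon.mean(axis=1)
        return ribbon

    # Usage: `daily_difficulty` would be a pd.Series of daily network difficulty indexed by date.
    # ribbon = difficulty_ribbon(daily_difficulty)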

There have been several significant price increases over Bitcoin’s lifespan that followed this ribbon compression breaking out of the green zone. The most recent was around April 2019 when BTC prices surged from below $5k to top out over $13k just three months later.

It was also observed that there had been a massive divergence between difficulty ribbon compression and Bitcoin price over the past six years. However, the chart used a logarithmic price scale, which may have caused that anomaly.

Bitcoin’s hash ribbon is a similar metric, and CryptoPotato reported that it was flashing buy signals back in July. In the five weeks that followed, BTC price surged 34% to make its 2020 high.

BTC Price Action Update

Looking at the shorter term, Bitcoin’s price chart has just printed another ‘Bart Simpson’ pattern, with a sharp 2.3% decline in just over an hour wiping out Monday’s gains.

Prices had recovered to $10,725 at the time of writing, and sentiment appears to be bullish for BTC, according to a recent poll by analyst and trader Josh Rager.

Bitcoin is currently trading right on the 50-day moving average, which is acting as resistance at the moment. The next step above this is a break above $11k, while on the low side, there is strong support at the $10k level. Analyst ‘CryptoHamster’ added:

“After the breakout the resistance line became support. Now it is getting tested. If it holds, it would be a very nice sign. But it has to hold, otherwise the whole growth is just a short squeeze.”

Short term charts suggest price could go either way, but longer-term on-chain analytics, such as the difficulty ribbon, are more bullish.

Source: https://cryptopotato.com/bitcoin-difficulty-ribbon-could-indicate-imminent-price-increase/

Bulgarian National Convicted For His Role in a Bitcoin-Related Crypto Exchange Scam

The owner of a cryptocurrency exchange has recently been convicted for his role in a transnational scheme that defrauded people through online auction fraud. The court says the scam reached a multi-million-dollar scale.

At Least 900 Americans Victimized

As per a recent report, more than 900 American citizens likely suffered from the fraud. According to the official statement, 53-year-old Rossen Iossifov, formerly of Bulgaria and the reported owner of the Bulgaria-based Bitcoin exchange R.G. Coins, was convicted of both conspiracy to commit racketeering and money laundering. Following a two-week jury trial in Frankfort, Kentucky, U.S. District Judge Robert E. Wier scheduled sentencing for January 12, 2021.

Reportedly, some of the Romania-based members of the group posted false advertisements on online auction and sales websites, including Craigslist and eBay. The ads promised victims high-value goods (typically vehicles) that did not exist.

As per the release, members of the scheme would use stolen identities and “persuasive narratives” to convince their victims to send money for the advertised items. For example, some of the ads impersonated a military member who needed to sell the advertised item before deployment.

The scammers also provided their victims with invoices bearing the trademarks of reputable companies, making the transactions seem legitimate. The legal document also reveals that members of the conspiracy set up call centers offering customer support. This way, they could answer victims’ questions and “alleviate concerns over the advertisements”.

Converting The Stolen Funds Into Crypto Assets

According to the official statement, once Iossifov received the victims’ funds, he and his associates would convert them into crypto assets and transfer them to off-shore money launderers.

As per the court documents, “from at least September 2015 to December 2018, the Bulgarian exchanged crypto assets into local fiat currency on behalf of his Romania-based partners in the scam, knowing that the Bitcoin represented the proceeds of illegal activity.”

According to the court statement, in just two and a half years, Iossifov exchanged more than $4.9 million worth of Bitcoin for only four of the members of the criminal team.

A total of seventeen defendants have been convicted in the case, while three others remain fugitives. Authorities in the U.S. and Romania have led the proceedings in the case.

It’s worth noting that the US DOJ is becoming increasingly active in pursuing crypto-related fraud. As CryptoPotato reported earlier, it went after 280 cryptocurrency accounts related to hackers from North Korea.

Source: https://cryptopotato.com/bulgarian-national-convicted-for-his-role-in-a-bitcoin-related-crypto-exchage-scam/

Aave Governance is Now on Mainnet: Incoming 100:1 Token Split For LEND?

The Aave money market DeFi protocol is about to become more decentralized than ever. In an announcement published on its official blog on September 25, 2020, the team reported the successful launch of Aave Governance on the mainnet.

This means that users of the protocol will now be able to vote on critical decisions for the project’s future. As Aave explains, the governance implementation was previously active on the Kovan and Ropsten testnets, giving users the ability to experiment with participating in the voting process for various improvement proposals, known as AIPs.

Bye LEND, Welcome AAVE Tokens: Aave’s First Proposal

Unlike the testnet implementations, the effects of launching governance on the mainnet are now more formal and reflect the development team’s commitment to empowering the community. The first AIP users will be able to vote on involves a token migration and a reduction of the total supply.

Should the first AIP be approved, LEND tokens will become AAVE tokens, and the total supply will shrink by a ratio of 100:1. While this may give the impression of reduced liquidity, the value or market cap should, in theory, remain the same, as the AAVE holdings in the owners’ wallets would also decrease proportionally.
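
A quick back-of-the-envelope illustration of why the 100:1 migration should leave a holder’s value unchanged in theory (the figures below are hypothetical, not actual LEND or AAVE quotes):

    # Hypothetical figures purely to illustrate the 100:1 migration arithmetic.
    lend_balance = 10_000              # LEND held before the migration
    lend_price = 0.50                  # assumed USD price per LEND (not a real quote)

    aave_balance = lend_balance / 100  # holdings shrink by 100:1
    aave_price = lend_price * 100      # in theory, the price scales up by the same factor

    print(lend_balance * lend_price)   # 5000.0 USD before
    print(aave_balance * aave_price)   # 5000.0 USD after - market value unchanged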

“The major benefits of the migration is that we activate the Safety Module, meaning that AAVE token holders can stake their AAVE and earn,” Aave CEO Stani Kulechov told us.

These modules are created to secure the protocol and would be used to recapitalize the platform in case of a deficit.

Will exchanges support the LEND to AAVE migration? According to Kulechov, “Major exchanges will support after the migration is complete; however, they will separately announce on the support.”

More Incentives for the Community

The Aave team also plans to consider including part of the protocol fees in these Safety Modules. This way, the security of the protocol would be guaranteed to increase along with its usage.

Stani Kulechov, Aave CEO. Source: Twitter

“The SM will act as a recapitalization mechanism, so in the case of a shortfall event, your stake may be slashed up to 30% to cover the deficit. The idea behind ‘safety mining’ is to reward community members who stake their AAVE to promote the safety of the protocol.”
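
As a rough sketch of the recapitalization logic described in the quote above, slashing can be thought of as taking the smaller of the deficit and 30% of the staked amount. The function below is illustrative only and is not Aave’s actual implementation.

    def cover_shortfall(total_staked: float, deficit: float, max_slash: float = 0.30) -> float:
        """Amount taken from Safety Module stakers, capped at 30% of the total stake."""
        return min(deficit, max_slash * total_staked)

    print(cover_shortfall(total_staked=1_000_000, deficit=200_000))  # 200000: deficit fully covered
    print(cover_shortfall(total_staked=1_000_000, deficit=500_000))  # 300000.0: capped at 30% of stake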

In addition to the Safety Incentives, users would have the opportunity to earn Ecosystem Incentives (EI) for supplying and borrowing assets within the platform. The team hopes that the community will also be able to decide how to distribute specific incentives in the near future.

Aave is one of the longest-running DeFi protocols in the ecosystem. It allows users to lend and borrow certain assets while putting up others as collateral. Interest rates are automatically determined by supply and demand, according to parameters coded into the protocol.

In addition, Aave became famous for allowing flash loans. This mode allows a person to take out an unsecured loan on the condition that it is repaid within the same transaction.

Source: https://cryptopotato.com/aave-governance-is-now-on-mainnet-incoming-1001-token-split-for-lend/
