Discussion:
[bitcoin-dev] BIP Proposal: Revised: UTPFOTIB - Use Transaction Priority For Ordering Transactions In Blocks
Damian Williamson via bitcoin-dev
2017-12-07 21:01:43 UTC
Good afternoon,

The need for this proposal:

We must all admit that limited transaction bandwidth is still lurking as a serious issue for the operation, reliability, safety, consumer acceptance, uptake and value of Bitcoin.

I recently sent a payment that was not urgent, so I chose the three-day confirmation target from the fee recommendation. That transaction has still not confirmed after more than six days - even waiting twice the target seems quite reasonable to me. That transaction is valid; it is not rubbish, junk or spam. Under the current model, with transaction bandwidth limited, the longer a transaction waits the less likely it is ever to confirm, as it is pushed back by rising transaction numbers and by transactions with rising fees.

I argue that no transactions are rubbish or junk; at most, some zero-fee transactions might be spam. An ever-increasing number of valid transactions that never confirm, as new transactions with higher fees keep arriving, is the opposite of a robust, reliable transaction system.

Business cannot operate with a model where transactions may or may not confirm. Even a business choosing a modest fee has no guarantee that its valid transaction will not be shuffled down by new transactions into the realm of never confirming. Consumers will not accept this model either as Bitcoin expands. If Bitcoin cannot be a reliable payment system for confirmed transactions, then consumers, by and large, will simply not accept the model once they understand it. Bitcoin will be a dirty payment system, and this will kill the value of Bitcoin.

Under the current system, only a minority of transactions will eventually be the lucky few whose fees are high enough to escape being pushed down the list.

Once there are more than x transactions (the transaction bandwidth limit) every ten minutes, only those choosing twenty-minute confirmation (two blocks) will initially have at most a fifty percent chance of ever having their payment confirm. Presently, not even following the fee recommendations can ensure a fee high enough to guarantee confirmation.

I also argue that the current auction model for limited transaction bandwidth is unsuitable for a reliable transaction system and wrong for Bitcoin. All transactions must confirm in due time. Currently, Bitcoin is not a safe way to send payments.

I do not believe that consumers and businesses are against paying fees, even high fees. What is required is operational reliability.

This is a significant issue that needs to be resolved for the safety and reliability of Bitcoin. The time to resolve issues in commerce is before they become great big issues; the time to resolve this one is now. We must have the foresight to identify and resolve problems before they trip us up. Simply doubling block sizes every so often is reactionary and not a reliable permanent solution. I have written a BIP proposal for a technical solution but need your help to write it up to an acceptable standard for a full BIP.

I have formatted the following with markdown, which is human readable, so I hope nobody minds. I have done as much with this proposal as I feel able so far, but I continue to take your feedback.

# BIP Proposal: UTPFOTIB - Use Transaction Priority For Ordering Transactions In Blocks

## The problem:
Everybody wants value. Miners want to maximize revenue from fees (and, we presume, to minimize block size). Consumers need transaction reliability and (we presume) want low fees.

The current transaction bandwidth limit constrains both. As the operational safety of transactions is limited, so is consumer confidence once they realize the issue, and uptake is limited accordingly. Fees are artificially inflated by the bandwidth limitation, while the system still fails to provide a full confirmation service for all transactions.

Current fee recommendations offer no assurance of transaction reliability and, as Bitcoin scales, this will worsen.

Bitcoin must be a fully scalable and reliable service, providing full transaction confirmation for every valid transaction.

The ability to send a transaction with a fee too low to allow eventual confirmation should be removed from the protocol, and also from the user interface.

## Solution summary:
Assign each transaction an individual priority each time transactions are being chosen for the current block. The priority is a function of the fee paid (on a curve) and the time spent waiting in the transaction pool (also on a curve), out to n days (n = 60?). The priority serves as the likelihood of the transaction being included in the current block, and determines the order in which transactions are tried for inclusion.

Use a target block size. Determine the target block size as: number of transactions to include in the current block = current transaction pool size x ( 1 / (144 x n days) ). Broadcast the next target block size with the current block when it is solved, so that nodes know the target size for the block they are building on top of it.
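
As a rough illustration of that calculation (the pool size below is only an assumed example figure; 144 is the expected number of blocks per day):

```python
BLOCKS_PER_DAY = 144  # expected blocks per day at ten-minute intervals

def target_block_txs(pool_size: int, n_days: int = 60) -> int:
    """Target number of transactions for the next block:
    current transaction pool size x (1 / (144 x n days))."""
    return max(1, round(pool_size / (BLOCKS_PER_DAY * n_days)))

# Example: a pool of 2,000,000 waiting transactions and n = 60 gives
# 2,000,000 / 8,640, i.e. roughly 231 transactions in the next block.
print(target_block_txs(2_000_000))  # -> 231
```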

The curves used for transaction priority would have to be appropriate; perhaps a mathematician with experience in probability can develop the right formulae. My thinking is a steep curve. The combined probabilities of all transactions should account for enough inclusions that the target block size is met, although it may not always be. As a suggestion, consider padding with some zero-fee transactions, highest BTC value first.

**Explanation of the operation of priority:**
> If transaction priority is, for example, a number between one (low) and one hundred (high), it can be read directly as the percentage chance of the transaction being included in the block. Speaking of probability or likelihood implies some function of randomness: if random(100) < transaction priority, then the transaction is included.

> To break it down further, if both the fee-on-a-curve value and the time-waiting-on-a-curve value are each a number between one and one hundred, a rudimentary method is simply to multiply the two numbers to find the priority. For example, a middle-fee transaction waiting thirty days (if n = 60 days) may have a value of five for each part (yes, just five; the values are on a curve). Multiplying gives a priority of twenty-five, or a twenty-five percent chance at that moment of being included in the block; it will likely be included in one of the next four blocks, the chance improving each time. If it is still not included, the time-waiting value will by then be higher, giving a greater probability. A very low fee transaction would have a fee value of one; it would not be until nearly sixty days that such a transaction has a high likelihood of being included in the block.
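
As a rudimentary sketch of the scoring just described (the curve shapes here are placeholder assumptions; the proposal leaves the real formulae to a mathematician):

```python
import random

N_DAYS = 60  # horizon after which a transaction should be all but certain to confirm

def fee_score(fee_percentile: float) -> float:
    """Map a fee percentile (0..1) onto a steep 1..100 curve (placeholder shape)."""
    return max(1.0, 100.0 * fee_percentile ** 4)

def time_score(days_waiting: float) -> float:
    """Map waiting time onto a steep 1..100 curve over N_DAYS (placeholder shape)."""
    return max(1.0, 100.0 * min(1.0, days_waiting / N_DAYS) ** 4)

def priority(fee_percentile: float, days_waiting: float) -> float:
    """Rudimentary method from the text: multiply the two curve values, capped at 100."""
    return min(100.0, fee_score(fee_percentile) * time_score(days_waiting))

def included_this_block(fee_percentile: float, days_waiting: float) -> bool:
    """'If random(100) < transaction priority then the transaction is included.'"""
    return random.uniform(0, 100) < priority(fee_percentile, days_waiting)

# A middling transaction scoring about five on each curve has priority ~25,
# i.e. roughly a one-in-four chance per block at that moment.
print(priority(0.473, 28.4))  # ~25
```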

I am not concerned with low (or high) transaction fees; the primary reason for addressing the issue is to ensure transactional reliability and scalability while having each transaction confirm in due time.

## Pros:
* Maximizes transaction reliability.
* Fully scalable.
* Maximizes possibility for consumer and business uptake.
* Maximizes total fees paid per block without reducing reliability; with reliability, confidence and overall uptake grow over time, and therefore so does transaction volume.
* Market determines fee paid for transaction priority.
* Fee recommendations work all the way out to 30 days or greater.
* Provides additional block entropy; greater security since there is less probability of predicting the next block.

## Cons:
* Could initially lower total transaction fees per block.
* Must first be programmed.

## Solution operation:
This is a simplistic view of the operation; the actual operation will need to be determined in a spec for the programmer. A rough sketch of steps 1 to 3 follows the list.

1. Determine the target block size for the current block.
2. Assign a transaction priority to each transaction in the pool.
3. Select transactions for the current block, using probability, in transaction priority order until the target block size is met.
4. Solve the block.
5. Broadcast the next target block size with the current block when it is solved.
6. Block is received.
7. Block verification process.
8. Accept/reject block based on verification result.
9. Repeat.
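
A rough sketch of steps 1 to 3 (the transaction structure and priority values are stand-ins for whatever the spec would define; mining and relay are out of scope):

```python
import random
from dataclasses import dataclass

@dataclass
class PendingTx:
    txid: str
    priority: float  # 1 (low) .. 100 (high), computed as in the solution summary

def select_for_block(pool: list[PendingTx], target_count: int) -> list[PendingTx]:
    """Steps 1-3: try transactions in priority order, including each with
    probability priority/100, until the target block size (in transactions) is met."""
    selected: list[PendingTx] = []
    for tx in sorted(pool, key=lambda t: t.priority, reverse=True):
        if len(selected) >= target_count:
            break
        if random.uniform(0, 100) < tx.priority:
            selected.append(tx)
    return selected

# Toy example: pick up to 5 transactions from a pool of 20 with random priorities.
pool = [PendingTx(f"tx{i}", random.uniform(1, 100)) for i in range(20)]
print([tx.txid for tx in select_for_block(pool, target_count=5)])
```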

## Closing comments:
It may be possible to verify that blocks conform to the proposal by showing that the priorities of the transactions included in a block statistically conform to the expected probability distribution, *if* the individual transaction priorities can be recreated. I am not that deep into the mathematics; however, it may also be possible to use a similar method based on the fee alone, checking that blocks statistically conform to a fee distribution. Any zero-fee transactions would have to be ignored. This solution needs a clever mathematician.
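
As a hypothetical illustration of such a check, not a proper statistical test: if each transaction's priority at block time can be recreated, the number of inclusions in each priority band should track that band's summed probabilities (zero-fee padding excluded).

```python
def conformity_report(priorities, included, bins=10):
    """priorities: recreated 1..100 priority of every pool transaction at block time.
    included:   parallel booleans, True if the transaction made it into the block.
    For an honest block, the observed inclusions in each priority band should be
    close to the expected count (the sum of priority/100 over that band)."""
    width = 100.0 / bins
    for b in range(bins):
        lo, hi = b * width, (b + 1) * width
        band = [(p, inc) for p, inc in zip(priorities, included)
                if lo <= p < hi or (b == bins - 1 and p == 100.0)]
        if not band:
            continue
        expected = sum(p for p, _ in band) / 100.0
        observed = sum(1 for _, inc in band if inc)
        print(f"priority {lo:5.1f}-{hi:5.1f}: expected ~{expected:7.1f}, observed {observed}")
```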

I implore that, at the very least, we adopt some method that ensures full transaction reliability and enables scalability of block sizes. If not this proposal, then an alternative.

Regards,
Damian Williamson
Damian Williamson via bitcoin-dev
2017-12-15 09:42:42 UTC
I should not take the lack of critical feedback on this revised proposal as a glowing endorsement. I understand that there would be technical issues to resolve in implementation, but are there no fundamental errors?

I suppose that if it is difficult to determine how long a transaction has been waiting in the pool, then each node could simply keep track of when a transaction was first seen. This may have implications for a verify routine; for example, if a node was offline, how should it determine how long each transaction had been waiting? If a node was restarted daily, would it always think that all transactions had been waiting in the pool for less than one day? If each node keeps the current transaction pool in a file and updates it as transactions are included in blocks and as new transactions appear in the pool, that would go some way toward alleviating the issue, apart from entirely new nodes. There should be no reason the contents of transaction pool files cannot be shared between nodes, without requiring agreement on the transaction pool, just as nodes relay new transactions freely.
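
A minimal sketch of that persistence idea (the file name and format are illustrative assumptions, not an existing feature):

```python
import json
import os
import time

# Keep a first-seen timestamp per txid in a local file so that a restarted
# node does not treat every pooled transaction as brand new.
STATE_FILE = "mempool_first_seen.json"

def load_first_seen() -> dict:
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return json.load(f)
    return {}

def record_first_seen(first_seen: dict, txid: str) -> None:
    first_seen.setdefault(txid, time.time())
    with open(STATE_FILE, "w") as f:
        json.dump(first_seen, f)

def days_waiting(first_seen: dict, txid: str) -> float:
    return (time.time() - first_seen.get(txid, time.time())) / 86400.0
```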

It has been questioned why miners could not cheat. On the question of how many transactions to include in a block, I say it is a standoff and miners will conform to the proposal, not wanting to leave transactions with valid fees standing and not wanting to shrink the transaction pool. In any case, if miners do shrink the transaction pool, I am not immediately concerned, since that provides a more efficient service. On the question of including transactions according to the proposal, I say that if it is possible to keep track of how long transactions have been waiting in the pool, so that they can be included on a probability curve, then it is possible to verify that blocks conform to the proposal: since the input is a probability, the output should conform to a probability curve.


If someone has the necessary skill, would anyone be willing to develop the math necessary for the proposal?

Regards,
Damian Williamson

Rhavar via bitcoin-dev
2017-12-15 16:38:46 UTC
> I understand that there would be technical issues to resolve in implementation, but, are there no fundamental errors?

Unfortunately, your proposal is fundamentally broken on a few levels. I think you might need to do a bit more research into how Bitcoin works before coming up with such improvements =)

But just some quick notes:

* Every node has a (potentially) different mempool, you can't use it to decide consensus values like the max block size.

* Increasing the entropy in a block to make it more unpredictable doesn't really make sense.

* Bitcoin should be roughly incentive compatible. Your proposal explicitly asks miners to ignore their best interests and confirm transactions by "priority". What are you going to do if a "malicious" miner decides to go after their profits and order by whatever makes them the most money? Add "ordered by priority" as a consensus requirement? Even if you do, miners can still sort their mempool by fee and then order the top 1MB by priority.

If you could find a good solution that would allow you to know whether miners were following your rule (and thus ignore blocks that don't), then you wouldn't even need Bitcoin in the first place.

-Ryan

Damian Williamson via bitcoin-dev
2017-12-15 20:59:51 UTC
There are really two separate problems to solve.


1. How does Bitcoin scale with fixed block size?
2. How do we ensure that all valid transactions are eventually included in the blockchain?


Those are the two issues the proposal attempts to address, and it makes sense to resolve them together. Using the proposed system for variable block sizes alone would solve the first problem, but there would still be a whole bunch of never-confirming transactions. I am not sure how to reliably solve the second problem at scale without first solving the first.


>* Every node has a (potentially) different mempool, you can't use it to decide consensus values like the max block size.

I do not suggest a consensus on the transaction pool. Depending on which node solves a block, the value for the next block size will be different. The consensus would be that blocks adhere to the next-block-size value transmitted with the current block. It is easy to verify that this is being adhered to once it is in place.

>* Increasing the entropy in a block to make it more unpredictable doesn't really make sense.

Not a necessary function, just a side effect of using a probability-based distribution.

>* Bitcoin should be roughly incentive compatible. Your proposal explicitly asks miners to ignore their best interests and confirm transactions by "priority". What are you going to do if a "malicious" miner decides to go after their profits and order by whatever makes them the most money? Add "ordered by priority" as a consensus requirement? Even if you do, miners can still sort their mempool by fee and then order the top 1MB by priority.

I entirely agree with your sentiment that Bitcoin must be incentive compatible. It is necessary.

It is only in miners' immediate interest to make the most profitable block from the available transaction pool. As with so many other things, it is necessary to partially forgo short-term gain for long-term benefit. It is in miners' and everybody's long-term interest to have a reliable transaction service. A busy transaction service that confirms lots of transactions per hour will become more profitable as demand increases and more users are prepared to pay for priority. As it is, there is currently no way to fully scale because of the transaction bandwidth limit, and that is problematic. If all valid transactions must eventually confirm, then there must be a way to resolve that problem.

Bitcoin deliberately removes traditional scaling by ensuring blocks take ten minutes on average to solve, an ingenious and incentive-compatible idea, but fixed block sizes leave us with a problem to solve when we want to scale.

>If you could find a good solution that would allow you to know whether miners were following your rule (and thus ignore blocks that don't), then you wouldn't even need Bitcoin in the first place.

I am confident that the math to verify blocks under the proposal can be developed (and I think it will not be too complex for a mathematician with the relevant experience); however, I am nowhere near experienced enough with probability and statistical analysis to do it myself. Yes, if Bitcoin does not do this, it might make another great opportunity for an altcoin, but I am not remotely interested in promoting altcoins.


If not the proposal that I have put forward, then, hopefully, someone can come up with a better solution. The important thing is that the issues are resolved.


Regards,

Damian Williamson


Damian Williamson via bitcoin-dev
2017-12-17 04:14:39 UTC
I do not know why people make the leap that the proposal requires a consensus on the transaction pool. It does not.


It may be helpful to have the discussion from the previous thread linked here.

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-December/015370.html


Where I speak of validating that a block conforms to the broadcast next block size, I do not propose validating the broadcast number itself, only that the next generated block is of that size.


Regards,

Damian Williamson


________________________________
From: Damian Williamson <***@live.com.au>
Sent: Saturday, 16 December 2017 7:59 AM
To: Rhavar
Cc: Bitcoin Protocol Discussion
Subject: Re: [bitcoin-dev] BIP Proposal: Revised: UTPFOTIB - Use Transaction Priority For Ordering Transactions In Blocks


There are really two separate problems to solve.


1. How does Bitcoin scale with fixed block size?
2. How do we ensure that all valid transactions are eventually included in the blockchain?


Those are the two issues that the proposal attempts to address. It makes sense to resolve these two problems together. Using the proposed system for variable block sizes would solve the first problem but there would still be a whole bunch of never confirming transactions. I am not sure how to reliably solve the second problem at scale without first solving the first.


>* Every node has a (potentially) different mempool, you can't use it to decide consensus values like the max block size.

I do not suggest a consensus. Depending on which node solves a block the value for next block size will be different. The consensus would be that blocks will adhere to the next block size value transmitted with the current block. It is easy to verify that the consensus is being adhered to once in place.

>* Increasing the entropy in a block to make it more unpredictable doesn't really make sense.

Not a necessary function, just an effect of using a probability-based distribution.

>* Bitcoin should be roughly incentive compatible. Your proposal explicits asks miners to ignore their best interests, and confirm transactions by "priority". What are you going to do if a "malicious" miner decides to go after their profits and order by what makes them the most money. Add "ordered by priority" as a consensus requirement? And even if you miners can still sort their mempool by fee, and then order the top 1MB by priority.

I entirely agree with your sentiment that Bitcoin must be incentive compatible. It is necessary.

It is in only miners immediate interest to make the most profitable block from the available transaction pool. As with so many other things, it is necessary to partially ignore short-term gain for long-term benefit. It is in miners and everybody's long-term interest to have a reliable transaction service. A busy transaction service that confirms lots of transactions per hour will become more profitable as demand increases and more users are prepared to pay for priority. As it is there is currently no way to fully scale because of the transaction bandwidth limit and that is problematic. If all valid transactions must eventually confirm then there must be a way to resolve that problem.

Bitcoin deliberately removes traditional scale by ensuring blocks take ten minutes on average to solve, an ingenious idea and, incentive compatible but, fixed block sizes leaves us with a problem to solve when we want to scale.

>If you could find a good solution that would allow you to know if miners were following your rule or not (and thus ignore it if it doesn't) then you wouldn't even need bitcoin in the first place.

I am confident that the math to verify blocks based on the proposal can be developed (and I think it will not be too complex for a mathematician with the relevant experience), however, I am nowhere near experienced enough with probability and statistical analysis to do it. Yes, if Bitcoin doesn't then it might make another great opportunity for an altcoin but I am not even nearly interested in promoting any altcoins.


If not the proposal that I have put forward, then, hopefully, someone can come up with a better solution. The important thing is that the issues are resolved.


Regards,

Damian Williamson


________________________________
From: Rhavar <***@protonmail.com>
Sent: Saturday, 16 December 2017 3:38 AM
To: Damian Williamson
Cc: Bitcoin Protocol Discussion
Subject: Re: [bitcoin-dev] BIP Proposal: Revised: UTPFOTIB - Use Transaction Priority For Ordering Transactions In Blocks

> I understand that there would be technical issues to resolve in implementation, but, are there no fundamental errors?

Unfortunately your proposal is really fundamentally broken, on a few levels. I think you might need to do a bit more research into how bitcoin works before coming up with such improvements =)

But just some quick notes:

* Every node has a (potentially) different mempool, you can't use it to decide consensus values like the max block size.

* Increasing the entropy in a block to make it more unpredictable doesn't really make sense.

* Bitcoin should be roughly incentive compatible. Your proposal explicitly asks miners to ignore their best interests and confirm transactions by "priority". What are you going to do if a "malicious" miner decides to go after their profits and order by what makes them the most money? Add "ordered by priority" as a consensus requirement? And even if you do, miners can still sort their mempool by fee, and then order the top 1MB by priority.

If you could find a good solution that would allow you to know if miners were following your rule or not (and thus ignore the block if they aren't) then you wouldn't even need bitcoin in the first place.




-Ryan


-------- Original Message --------
Subject: [bitcoin-dev] BIP Proposal: Revised: UTPFOTIB - Use Transaction Priority For Ordering Transactions In Blocks
Local Time: December 15, 2017 3:42 AM
UTC Time: December 15, 2017 9:42 AM
From: bitcoin-***@lists.linuxfoundation.org
To: Bitcoin Protocol Discussion <bitcoin-***@lists.linuxfoundation.org>




I should not take the lack of critical feedback on this revised proposal as a glowing endorsement. I understand that there would be technical issues to resolve in implementation, but are there no fundamental errors?

I suppose that if it is difficult to determine how long a transaction has been waiting in the pool, each node could simply keep track of when a transaction was first seen. This may have implications for a verify routine; for example, if a node was offline, how should it differentiate how long each transaction had been waiting? If a node was restarted daily, would it always think that all transactions had been waiting in the pool for less than one day? If each node keeps the current transaction pool in a file and updates it as transactions are included in blocks and as new transactions appear in the pool, then that would go some way to alleviating the issue, apart from entirely new nodes. There should be no reason the contents of transaction pool files cannot be shared between nodes, without requiring agreement on the transaction pool, just as nodes transmit new transactions freely.
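A minimal sketch of that bookkeeping (the file name, JSON format, and function names are illustrative assumptions): a node could persist a first-seen timestamp per txid so that waiting times survive restarts.

```python
import json
import os
import time

POOL_FILE = "mempool_first_seen.json"  # illustrative location only

def load_first_seen(path: str = POOL_FILE) -> dict:
    """Load the txid -> first-seen-timestamp map persisted by a previous run."""
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return {}

def record_first_seen(first_seen: dict, txid: str, path: str = POOL_FILE) -> None:
    """Record a txid only the first time it is seen, so re-announcements and
    restarts do not reset its waiting time."""
    if txid not in first_seen:
        first_seen[txid] = time.time()
        with open(path, "w") as f:
            json.dump(first_seen, f)

def days_waiting(first_seen: dict, txid: str) -> float:
    """How long the transaction has been waiting, in days."""
    return (time.time() - first_seen.get(txid, time.time())) / 86400.0
```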

It has been questioned why miners could not cheat. On the question of how many transactions to include in a block, I say it is a standoff: miners will conform to the proposal, not wanting to leave transactions with valid fees standing and not wanting to shrink the transaction pool. In any case, if miners shrink the transaction pool, I am not immediately concerned, since that provides a more efficient service. On the question of including transactions according to the proposal, I say that if it is possible to keep track of how long transactions have been waiting in the pool so that they can be included on a probability curve, then it is possible to verify that blocks conform to the proposal: since the input is a probability, the output should conform to a probability curve.



Would anyone with the necessary skill be willing to develop the math for the proposal?

Regards,
Damian Williamson


________________________________

From: bitcoin-dev-***@lists.linuxfoundation.org <bitcoin-dev-***@lists.linuxfoundation.org> on behalf of Damian Williamson via bitcoin-dev <bitcoin-***@lists.linuxfoundation.org>
Sent: Friday, 8 December 2017 8:01 AM
To: bitcoin-***@lists.linuxfoundation.org
Subject: [bitcoin-dev] BIP Proposal: Revised: UTPFOTIB - Use Transaction Priority For Ordering Transactions In Blocks



Good afternoon,

The need for this proposal:

We all must learn to admit that transaction bandwidth is still lurking as a serious issue for the operation, reliability, safety, consumer acceptance, uptake and, for the value of Bitcoin.

I recently sent a payment which was not urgent so; I chose three-day target confirmation from the fee recommendation. That transaction has still not confirmed after now more than six days - even waiting twice as long seems quite reasonable to me. That transaction is a valid transaction; it is not rubbish, junk or, spam. Under the current model with transaction bandwidth limitation, the longer a transaction waits, the less likely it is ever to confirm due to rising transaction numbers and being pushed back by transactions with rising fees.

I argue that no transactions are rubbish or junk, only some zero fee transactions might be spam. Having an ever-increasing number of valid transactions that do not confirm as more new transactions with higher fees are created is the opposite of operating a robust, reliable transaction system.

Business cannot operate with a model where transactions may or may not confirm. Even a business choosing a modest fee has no guarantee that their valid transaction will not be shuffled down by new transactions to the realm of never confirming after it is created. Consumers also will not accept this model as Bitcoin expands. If Bitcoin cannot be a reliable payment system for confirmed transactions then consumers, by and large, will simply not accept the model once they understand. Bitcoin will be a dirty payment system, and this will kill the value of Bitcoin.

Under the current system, a minority of transactions will eventually be the lucky few who have fees high enough to escape being pushed down the list.

Once there are more than x transactions (transaction bandwidth limit) every ten minutes, only those choosing twenty-minute confirmation (2 blocks) will have initially at most a fifty percent chance of ever having their payment confirm. Presently, not even using fee recommendations can ensure a sufficiently high fee is paid to ensure transaction confirmation.

I also argue that the current auction model for limited transaction bandwidth is wrong, is not suitable for a reliable transaction system and, is wrong for Bitcoin. All transactions must confirm in due time. Currently, Bitcoin is not a safe way to send payments.

I do not believe that consumers and business are against paying fees, even high fees. What is required is operational reliability.

This great issue needs to be resolved for the safety and reliability of Bitcoin. The time to resolve issues in commerce is before they become great big issues. The time to resolve this issue is now. We must have the foresight to identify and resolve problems before they trip us over. Simply doubling block sizes every so often is reactionary and is not a reliable permanent solution. I have written a BIP proposal for a technical solution but, need your help to write it up to an acceptable standard to be a full BIP.

I have formatted the following with markdown which is human readable so, I hope nobody minds. I have done as much with this proposal as I feel that I am able so far but continue to take your feedback.

# BIP Proposal: UTPFOTIB - Use Transaction Priority For Ordering Transactions In Blocks

## The problem:
Everybody wants value. Miners want to maximize revenue from fees (and, we presume, to minimize block size). Consumers need transaction reliability and (we presume) want low fees.

The current transaction bandwidth limit is a limiting factor for both. As the operational safety of transactions is limited, so is consumer confidence as users realize the issue and, accordingly, uptake is limited. Fees are artificially inflated by the bandwidth limitation, while a full confirmation service is still not provided for all transactions.

Current fee recommendations provide no assurance of transaction reliability and, as Bitcoin scales, this will worsen.

Bitcoin must be a fully scalable and reliable service, providing full transaction confirmation for every valid transaction.

The ability to send a transaction with a fee too low to allow eventual confirmation should be removed from the protocol and from the user interface.

## Solution summary:
Assign each transaction an individual priority each time transactions are chosen for the current block, the priority being a function of the fee paid (on a curve) and of the time spent waiting in the transaction pool (also on a curve), out to n days (n = 60?). The transaction priority serves as the likelihood of a transaction being included in the current block and determines the order in which transactions are tried for inclusion.

Use a target block size. Determine it as: number of transactions to include in the current block = current transaction pool size x (1 / (144 x n)), where 144 is the expected number of blocks per day. Broadcast the next target block size with the current block when it is solved, so that nodes know the target size for the block they are building on.
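A minimal sketch of that calculation (the function name and the rounding rule are assumptions):

```python
def target_tx_count(pool_size: int, n_days: int = 60, blocks_per_day: int = 144) -> int:
    """Number of transactions to aim for in the next block, per the formula above."""
    return max(1, round(pool_size / (blocks_per_day * n_days)))

# Example: a pool of 200,000 waiting transactions with n = 60 gives roughly 23 per block.
print(target_tx_count(200_000))  # 23
```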

The curves used for transaction priority would have to be appropriate; perhaps a mathematician with experience in probability can develop the right formulae. My thinking is a steep curve. The probabilities across all transactions should account for enough inclusions that the target block size is met, although it may not always be. As a suggestion, consider padding with some zero-fee transactions, highest BTC value first.

**Explanation of the operation of priority:**
> If transaction priority is, for example, a number between one (low) and one hundred (high), it can be understood directly as the percentage chance, in one hundred, of a transaction being included in the block. Using probability or likelihood implies some random function: if random(100) < transaction priority, then the transaction is included.

>To break it down further, if both the fee-on-a-curve value and the time-waiting-on-a-curve value are each a number between one and one hundred, a rudimentary method may be simply to multiply the two numbers to find the priority. For example, a middle-fee transaction waiting thirty days (if n = 60 days) may have a value of five for each part (yes, just five; the values are on a curve). Multiplying gives a priority of twenty-five, or a twenty-five percent chance at that moment of being included in the block; it will likely be included in one of the next four blocks, becoming more likely with each chance. If it is still not included, the value for time waiting will be higher, giving a greater probability. A very low fee transaction would have a fee value of one; it would not be until near sixty days that such a transaction has a high likelihood of being included in the block.
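A minimal sketch of that rudimentary method, assuming placeholder curve shapes (the exponent, the reading of the product as a percentage, and all names below are assumptions; the proposal leaves the actual curves to be designed):

```python
import random

def curve_value(x: float, steepness: float = 5.0) -> float:
    """Map a normalised input in [0, 1] onto a steep curve in [1, 100].
    With steepness 5, a mid-range input of 0.5 scores roughly 4, close to the
    'value of five' used in the worked example above."""
    return 1 + 99 * (x ** steepness)

def priority(fee_percentile: float, days_waiting: float, n_days: int = 60) -> float:
    """Multiply the fee score and the waiting-time score, read as a percentage
    chance of inclusion (values above 100 simply mean certain inclusion)."""
    fee_score = curve_value(fee_percentile)
    time_score = curve_value(min(days_waiting / n_days, 1.0))
    return fee_score * time_score

def included_this_block(prio: float) -> bool:
    # "If random(100) < transaction priority then the transaction is included."
    return random.uniform(0, 100) < prio

# Example: a middle-fee transaction that has waited 30 of 60 days.
p = priority(fee_percentile=0.5, days_waiting=30)
print(round(p, 1), included_this_block(p))
```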

I am not concerned with low (or high) transaction fees; the primary reason for addressing the issue is to ensure transactional reliability and scalability while having each transaction confirm in due time.

## Pros:
* Maximizes transaction reliability.
* Fully scalable.
* Maximizes possibility for consumer and business uptake.
* Maximizes total fees paid per block without reducing reliability; because of that reliability, confidence and overall uptake grow over time and, therefore, so does the number of transactions.
* Market determines fee paid for transaction priority.
* Fee recommendations work all the way out to 30 days or greater.
* Provides additional block entropy; greater security since there is less probability of predicting the next block.

## Cons:
* Could initially lower total transaction fees per block.
* Must first be programmed.

## Solution operation:
This is a simplistic view of the operation. The actual operation will need to be determined in a spec for the programmer.

1. Determine the target block size for the current block.
2. Assign a transaction priority to each transaction in the pool.
3. Select transactions to include in the current block using probability, in transaction priority order, until the target block size is met (a sketch follows this list).
4. Solve the block.
5. Broadcast the next target block size with the current block when it is solved.
6. Block is received.
7. Block verification process.
8. Accept/reject block based on verification result.
9. Repeat.
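A minimal sketch of steps 1-3, under the same assumptions as the earlier snippets (the greedy fill rule and data shapes are illustrative, not part of the proposal):

```python
import random

def build_block_template(pool: list, target_count: int) -> list:
    """pool items are dicts with 'txid' and 'priority' (a percentage, as above).
    Transactions are tried in descending priority order; each is included with
    probability priority/100 until the target count is reached."""
    chosen = []
    for tx in sorted(pool, key=lambda t: t["priority"], reverse=True):
        if len(chosen) >= target_count:
            break
        if random.uniform(0, 100) < tx["priority"]:
            chosen.append(tx)
    return chosen

# Example with three dummy transactions and a target of two.
pool = [{"txid": "a", "priority": 90}, {"txid": "b", "priority": 40}, {"txid": "c", "priority": 5}]
print([tx["txid"] for tx in build_block_template(pool, 2)])
```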

## Closing comments:
It may be possible to verify that blocks conform to the proposal by showing that the priorities of the transactions included in a block statistically conform to the expected probability distribution, *if* the individual transaction priority can be recreated. I am not that deep into the mathematics; however, it may also be possible to use a similar method based on the fee alone, checking that blocks statistically conform to a fee distribution. Any zero-fee transactions would have to be ignored. This solution needs a clever mathematician.
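One crude form such a check could take, assuming a verifier can recreate each candidate's priority as an inclusion probability (the function below and the rough |z| > 3 threshold are assumptions, not a worked-out test):

```python
import math

def inclusion_z_score(priorities: list, included: list) -> float:
    """priorities[i] is the recreated inclusion probability (0..1) of candidate i,
    and included[i] is whether it made it into the block. If each inclusion is a
    Bernoulli(p) event, the observed count should sit near sum(p) with variance
    sum(p * (1 - p)); the z-score measures how far off a block (or many blocks) is."""
    expected = sum(priorities)
    variance = sum(p * (1 - p) for p in priorities)
    observed = sum(1 for flag in included if flag)
    return (observed - expected) / math.sqrt(variance) if variance > 0 else 0.0

# A |z| persistently greater than about 3 across many blocks would suggest the
# miner is not selecting by priority; a single block proves little either way.
```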

I implore, at the very least, that we use some method that validates full transaction reliability and enables scalability of block sizes. If not this proposal, an alternative.

Regards,
Damian Williamson
Damian Williamson via bitcoin-dev
2017-12-19 07:51:39 UTC
Permalink
Thank you for your constructive feedback. I now see that the proposal introduces a potential issue.


>Finally in terms of the broad goal, having block size based on the number of transactions is NOT something desirable in the first place, even if it did work. That’s effectively the same as an infinite block size since anyone anywhere can create transactions in the mempool at no cost.


Do you have any concrete suggestion as to how the transaction bandwidth limit could be addressed? It will eventually become an issue if nothing is changed, regardless of how high fees go.


Regards,

Damian Williamson



________________________________
From: Mark Friedenbach <***@friedenbach.org>
Sent: Tuesday, 19 December 2017 3:08 AM
To: Damian Williamson
Subject: Re: [bitcoin-dev] BIP Proposal: Revised: UTPFOTIB - Use Transaction Priority For Ordering Transactions In Blocks

Damian, you seem to be misunderstanding that either

(1) the strong form of your proposal requires validating the commitment to the mempool properties, in which case the mempool becomes consensus critical (an impossible requirement); or

(2) in the weak form, where the current block depends only on the commitment in the last block, it becomes a miner-selected field that miners can freely parameterize, with no repercussions for setting values totally independent of the actual mempool.

If you want to make the block size dependent on the properties of the mempool in a consensus critical way, flex cap achieves this. If you want to make the contents or properties of the mempool known to well-connected nodes, weak blocks achieves that. But you can’t stick the mempool in consensus because it fundamentally is not something the nodes have consensus over. That’s a chicken-and-the-egg assumption.

Finally in terms of the broad goal, having block size based on the number of transactions is NOT something desirable in the first place, even if it did work. That’s effectively the same as an infinite block size since anyone anywhere can create transactions in the mempool at no cost.

On Dec 16, 2017, at 8:14 PM, Damian Williamson via bitcoin-dev <bitcoin-***@lists.linuxfoundation.org> wrote:

I do not know why people make the leap that the proposal requires a consensus on the transaction pool. It does not.

It may be helpful to have the discussion from the previous thread linked here.
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-December/015370.html

Where I speak of validating that a block conforms to the broadcast next block size, I do not propose validating the number broadcast for the next block size itself, only that the next generated block is that size.

Regards,
Damian Williamson


________________________________
From: Damian Williamson <***@live.com.au>
Sent: Saturday, 16 December 2017 7:59 AM
To: Rhavar
Cc: Bitcoin Protocol Discussion
Subject: Re: [bitcoin-dev] BIP Proposal: Revised: UTPFOTIB - Use Transaction Priority For Ordering Transactions In Blocks

There are really two separate problems to solve.


1. How does Bitcoin scale with fixed block size?
2. How do we ensure that all valid transactions are eventually included in the blockchain?

Those are the two issues that the proposal attempts to address, and it makes sense to resolve them together. Using the proposed system for variable block sizes would solve the first problem, but there would still be a whole bunch of never-confirming transactions. I am not sure how to reliably solve the second problem at scale without first solving the first.

Damian Williamson via bitcoin-dev
2017-12-22 06:22:40 UTC
Permalink
If the cash value of Bitcoin were high enough, and zero-fee transactions were never accepted and were not counted when calculating the transaction pool size, then I do not think it would be such an issue. Why is it even possible to create zero-fee transactions?


Regards,

Damian Williamson

Spartacus Rex via bitcoin-dev
2017-12-22 18:07:49 UTC
Permalink
Hi Damian,

Thought I'd chip in. This is a hard-fork scenario, and this system has flaws; they all do.

If you had a fixed fee per block, so that every txn in that block paid the
same fee, that might make it easier to include all txns eventually, as you
envisage.

The fee could be calculated as the average of the amount txns are prepared
to pay in the last 1000 blocks.

A txn would say "I'll pay up to X bitcoins", and as long as that is more than the value required for the block, your txn can be added. This ensures you don't pay more than you are willing. It also ensures that attaching an enormous fee will not get your txn processed more quickly.

Calculating what the outputs are, given a variable fee, needs a new mechanism all of its own, but I'm sure it's possible.
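A minimal sketch of that scheme, under the stated assumptions (the satoshi amounts, the 1000-block averaging window, and all names below are illustrative):

```python
def block_fee(recent_max_fees: list) -> int:
    """Uniform fee for the next block: the average of the 'up to X' amounts
    declared by transactions in the last 1000 blocks."""
    return sum(recent_max_fees) // len(recent_max_fees)

def eligible(declared_max_fee: int, current_block_fee: int) -> bool:
    """A transaction can be added as long as its declared maximum covers the
    block fee; offering far more than the block fee buys no extra speed."""
    return declared_max_fee >= current_block_fee

# Example: if recent declared maxima average out at 12,000 sats, a transaction
# offering up to 15,000 sats is eligible and actually pays 12,000.
fee = block_fee([10_000, 14_000, 12_000])
print(fee, eligible(15_000, fee))
```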

The simple fact is that there is currently no known system that works as well as the current one.

But there are other systems.


> really make sense.
>
> Not a necessary function, just an effect of using a probability-based
> distribution.
>
> >* Bitcoin should be roughly incentive compatible. Your proposal explicits
> asks miners to ignore their best interests, and confirm transactions by
> "priority". What are you going to do if a "malicious" miner decides to go
> after their profits and order by what makes them the most money. Add
> "ordered by priority" as a consensus requirement? And even if you miners
> can still sort their mempool by fee, and then order the top 1MB by priority.
>
> I entirely agree with your sentiment that Bitcoin must be incentive
> compatible. It is necessary.
>
> It is in only miners immediate interest to make the most profitable block
> from the available transaction pool. As with so many other things, it is
> necessary to partially ignore short-term gain for long-term benefit. It is
> in miners and everybody's long-term interest to have a reliable transaction
> service. A busy transaction service that confirms lots of transactions per
> hour will become more profitable as demand increases and more users are
> prepared to pay for priority. As it is there is currently no way to fully
> scale because of the transaction bandwidth limit and that is problematic.
> If all valid transactions must eventually confirm then there must be a way
> to resolve that problem.
>
> Bitcoin deliberately removes traditional scale by ensuring blocks take ten
> minutes on average to solve, an ingenious idea and, incentive compatible
> but, fixed block sizes leaves us with a problem to solve when we want to
> scale.
>
> >If you could find a good solution that would allow you to know if miners
> were following your rule or not (and thus ignore it if it doesn't) then you
> wouldn't even need bitcoin in the first place.
>
> I am confident that the math to verify blocks based on the proposal can be
> developed (and I think it will not be too complex for a mathematician with
> the relevant experience), however, I am nowhere near experienced enough
> with probability and statistical analysis to do it. Yes, if Bitcoin doesn't
> then it might make another great opportunity for an altcoin but I am not
> even nearly interested in promoting any altcoins.
>
>
> If not the proposal that I have put forward, then, hopefully, someone can
> come up with a better solution. The important thing is that the issues are
> resolved.
>
> Regards,
> Damian Williamson
>
>
> ------------------------------
> *From:* Rhavar <***@protonmail.com>
> *Sent:* Saturday, 16 December 2017 3:38 AM
> *To:* Damian Williamson
> *Cc:* Bitcoin Protocol Discussion
> *Subject:* Re: [bitcoin-dev] BIP Proposal: Revised: UTPFOTIB - Use
> Transaction Priority For Ordering Transactions In Blocks
>
> > I understand that there would be technical issues to resolve in
> implementation, but, are there no fundamental errors?
>
> Unfortunately your proposal is really fundamentally broken, on a few
> levels. I think you might need to do a bit more research into how bitcoin
> works before coming up with such improvements =)
>
> But just some quick notes:
>
> * Every node has a (potentially) different mempool, you can't use it to
> decide consensus values like the max block size.
>
> * Increasing the entropy in a block to make it more unpredictable doesn't
> really make sense.
>
> * Bitcoin should be roughly incentive compatible. Your proposal explicits
> asks miners to ignore their best interests, and confirm transactions by
> "priority". What are you going to do if a "malicious" miner decides to go
> after their profits and order by what makes them the most money. Add
> "ordered by priority" as a consensus requirement? And even if you miners
> can still sort their mempool by fee, and then order the top 1MB by priority.
>
> If you could find a good solution that would allow you to know if miners
> were following your rule or not (and thus ignore it if it doesn't) then you
> wouldn't even need bitcoin in the first place.
>
>
>
>
> -Ryan
>
>
> -------- Original Message --------
> Subject: [bitcoin-dev] BIP Proposal: Revised: UTPFOTIB - Use Transaction
> Priority For Ordering Transactions In Blocks
> Local Time: December 15, 2017 3:42 AM
> UTC Time: December 15, 2017 9:42 AM
> From: bitcoin-***@lists.linuxfoundation.org
> To: Bitcoin Protocol Discussion <bitcoin-***@lists.linuxfoundation.org>
>
>
>
> I should not take it that the lack of critical feedback to this revised
> proposal is a glowing endorsement. I understand that there would be
> technical issues to resolve in implementation, but, are there no
> fundamental errors?
>
> I suppose that it if is difficult to determine how long a transaction has
> been waiting in the pool then, each node could simply keep track of when a
> transaction was first seen. This may have implications for a verify
> routine, however, for example, if a node was offline, how should it
> differentiate how long each transaction was waiting in that case? If a node
> was restarted daily would it always think that all transactions had been
> waiting in the pool less than one day If each node keeps the current
> transaction pool in a file and updates it, as transactions are included in
> blocks and, as new transactions appear in the pool, then that would go some
> way to alleviate the issue, apart from entirely new nodes. There should be
> no reason the contents of a transaction pool files cannot be shared without
> agreement as to the transaction pool between nodes, just as nodes
> transmit new transactions freely.
>
> It has been questioned why miners could not cheat. For the question of how
> many transactions to include in a block, I say it is a standoff and miners
> will conform to the proposal, not wanting to leave transactions with valid
> fees standing, and, not wanting to shrink the transaction pool. In any
> case, if miners shrink the transaction pool then I am not immediately
> concerned since it provides a more efficient service. For the question of
> including transactions according to the proposal, I say if it is possible
> to keep track of how long transactions are waiting in the pool so that they
> can be included on a probability curve then it is possible to verify that
> blocks conform to the proposal, since the input is a probability, the
> output should conform to a probability curve.
>
>
> If someone has the necessary skill, would anyone be willing to develop the
> math necessary for the proposal?
>
> Regards,
> Damian Williamson
>
>
>
Spartacus Rex via bitcoin-dev
2017-12-24 09:02:09 UTC
Permalink
..What you have proposed is interesting but seems to do nothing for the
issue of transaction
bandwidth, which seems to be approaching its threshold:
..

This system just shows one way of changing the way a miner calculates txn
priority.

A miner will always do what makes him the most money, so an old txn will never get priority if a newer one offering more fees comes along. This is why some txns will never get confirmation.

In this system a txn cannot simply pay more fees, as all txns in a block pay the same fee, so an old txn whose bid clears the threshold is worth just as much to a miner as any txn that comes along later.

This way you can be sure that your txn will confirm at some point, and not just be relegated to the 'never confirmed' pile.


On Dec 24, 2017 03:44, "Damian Williamson" <***@live.com.au> wrote:

>.. This system has flaws, they all do.


>The simple fact is that there is currently no known system that works as
well as the current system..


Alright, but we seem to agree that the current system also has flaws. The transaction bandwidth limit is a serious issue for transactional reliability.


What you have proposed is interesting but seems to do nothing for the issue
of transaction bandwidth, which seems to be approaching its threshold:

https://bitinfocharts.com/comparison/bitcoin-transactions.html


Regards,

Damian Williamson
Damian Williamson via bitcoin-dev
2017-12-23 01:24:28 UTC
Permalink
I suppose what I intended is (2) the weak form, but what is essentially needed is (1) the strong form. The answer may lie somewhere in between.


I do not see that an entire consensus for the mempool is needed; each node just needs a loose understanding of the average number of non-zero-fee transactions in the mempool.


As a pre-rollout, it would be possible to give each node a serial ID, have it calculate the average number of non-zero-fee transactions from the information it has and, say every ten minutes, distribute the information it has about the number of transactions in its mempool. Each node would then be able to form its own picture of the average number of non-zero-fee transactions in the mempool.


At rollout, this information would be the basis a node uses, when a block is solved, to provide the next expected block size. This would still not stop cheating, especially by providing a number lower than the proposal would allow for, to game the system and hike fees. If miners will not act in the long-term interest of the stability and operation of the system then they should be ignored. If most miners adhere to the proposal then the average effect would be stability in its operation; having a few or even several nodes posting low numbers for the next expected block size would not destroy the operation. If some node posted an insanely high number for the next expected block size that resulted in the mempool being emptied then the proposal would be offended, but I do not actually care. If no number is posted, just create a block of the appropriate size to ensure conformity. Nodes that have not adopted the proposal could simply continue to create 1MB blocks.


Actually, the operation could be simplified by using the distributed information directly to create blocks of the appropriate size, with no need to broadcast a next block size. Flexible block size.


The proposal should also specify a minimum number of transactions to include in the next block, so that at minimum a 1MB block is produced.
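To make the averaging idea concrete, here is a minimal Python sketch of a node tracking peer-reported non-zero-fee mempool counts and deriving the next block's transaction count with a floor equivalent to a 1MB block. The message shape, the `MIN_TX_COUNT` stand-in for "enough transactions to fill ~1MB" and the sixty-day horizon are assumptions for illustration only.

```python
from statistics import mean

BLOCKS_PER_DAY = 144
N_DAYS = 60            # horizon assumed from the original proposal (n = 60 ?)
MIN_TX_COUNT = 2000    # assumed stand-in for "enough transactions to fill ~1MB"

class PoolSizeTracker:
    """Keeps the latest non-zero-fee mempool count reported by each peer."""

    def __init__(self):
        self.reports = {}                      # peer_id -> reported count

    def record(self, peer_id, nonzero_fee_count):
        # Called roughly every ten minutes as peers distribute their counts.
        self.reports[peer_id] = nonzero_fee_count

    def average_pool_size(self, own_count):
        # Each node forms its own picture from its peers' reports plus its own.
        return mean(list(self.reports.values()) + [own_count])

    def next_block_tx_count(self, own_count):
        # pool size x (1 / (144 x n days)), floored at a ~1MB-sized block.
        target = self.average_pool_size(own_count) / (BLOCKS_PER_DAY * N_DAYS)
        return max(MIN_TX_COUNT, int(target))
```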


I currently have no information on flex cap; do you have a link?


Regards,

Damian Williamson


________________________________
From: Mark Friedenbach <***@friedenbach.org>
Sent: Tuesday, 19 December 2017 3:08 AM
To: Damian Williamson
Subject: Re: [bitcoin-dev] BIP Proposal: Revised: UTPFOTIB - Use Transaction Priority For Ordering Transactions In Blocks

Damian, you seem to be misunderstanding that either

(1) the strong form of your proposal requires validating the commitment to the mempool properties, in which case the mempool becomes consensus critical (an impossible requirement); or

(2) in the weak form where the current block is dependent on the commitment in the last block only it is becomes a miner-selected field they can freely parameterize with no repercussions for setting values totally independent of the actual mempool.

If you want to make the block size dependent on the properties of the mempool in a consensus critical way, flex cap achieves this. If you want to make the contents or properties of the mempool known to well-connected nodes, weak blocks achieves that. But you can’t stick the mempool in consensus because it fundamentally is not something the nodes have consensus over. That’s a chicken-and-the-egg assumption.

Finally in terms of the broad goal, having block size based on the number of transactions is NOT something desirable in the first place, even if it did work. That’s effectively the same as an infinite block size since anyone anywhere can create transactions in the mempool at no cost.

Chris Riley via bitcoin-dev
2017-12-18 12:09:34 UTC
Permalink
Regarding "problem" #2 where you say "How do we ensure that all valid
transactions are eventually included in the blockchain?": I do not believe
that all people would (a) agree this is a problem or (b) that we do want to
*ENSURE* that *ALL* valid transactions are eventually included in the
blockchain. There are many *valid* transactions that miners often do not
(and should not) wish to be required to confirm and include in the
blockchain. Spam transactions, for example, can be valid but are used to
attack bitcoin by paying no fee or a low fee. Any valid transaction MAY be
included by a
miner, but requiring it in some fashion at this point would open the
network to other attack vectors. Perhaps you meant it a different way.


On Fri, Dec 15, 2017 at 3:59 PM, Damian Williamson via bitcoin-dev <
bitcoin-***@lists.linuxfoundation.org> wrote:
>
> There are really two separate problems to solve.
>
>
> How does Bitcoin scale with fixed block size?
> How do we ensure that all valid transactions are eventually included in
the blockchain?
>
>
> Those are the two issues that the proposal attempts to address. It makes
sense to resolve these two problems together. Using the proposed system for
variable block sizes would solve the first problem but there would still be
a whole bunch of never confirming transactions. I am not sure how to
reliably solve the second problem at scale without first solving the first.
>
>
> >* Every node has a (potentially) different mempool, you can't use it to
decide consensus values like the max block size.
>
>
> I do not suggest a consensus. Depending on which node solves a block the
value for the next block size will be different. The consensus would be that
blocks will adhere to the next block size value transmitted with the
current block. It is easy to verify that the consensus is being adhered to
once in place.
>
> >* Increasing the entropy in a block to make it more unpredictable
doesn't really make sense.
>
> Not a necessary function, just an effect of using a probability-based
distribution.
>
> >* Bitcoin should be roughly incentive compatible. Your proposal
explicits asks miners to ignore their best interests, and confirm
transactions by "priority". What are you going to do if a "malicious"
miner decides to go after their profits and order by what makes them the
most money. Add "ordered by priority" as a consensus requirement? And even
if you miners can still sort their mempool by fee, and then order the top
1MB by priority.
>
> I entirely agree with your sentiment that Bitcoin must be incentive
compatible. It is necessary.
>
> It is only in miners' immediate interest to make the most profitable block
from the available transaction pool. As with so many other things, it is
necessary to partially ignore short-term gain for long-term benefit. It is
in miners and everybody's long-term interest to have a reliable transaction
service. A busy transaction service that confirms lots of transactions per
hour will become more profitable as demand increases and more users are
prepared to pay for priority. As it is there is currently no way to fully
scale because of the transaction bandwidth limit and that is problematic.
If all valid transactions must eventually confirm then there must be a way
to resolve that problem.
>
> Bitcoin deliberately removes traditional scale by ensuring blocks take
ten minutes on average to solve, an ingenious and incentive-compatible idea,
but fixed block sizes leave us with a problem to solve when we want to
scale.
>
> >If you could find a good solution that would allow you to know if miners
were following your rule or not (and thus ignore it if it doesn't) then you
wouldn't even need bitcoin in the first place.
>
> I am confident that the math to verify blocks based on the proposal can
be developed (and I think it will not be too complex for a mathematician
with the relevant experience), however, I am nowhere near experienced
enough with probability and statistical analysis to do it. Yes, if Bitcoin
doesn't adopt it then it might make another great opportunity for an altcoin,
but I am not even nearly interested in promoting any altcoins.
>
>
> If not the proposal that I have put forward, then, hopefully, someone can
come up with a better solution. The important thing is that the issues are
resolved.
>
>
> Regards,
>
> Damian Williamson
Damian Williamson via bitcoin-dev
2017-12-19 07:48:37 UTC
Permalink
Thank you for your constructive feedback. I now see that the proposal introduces a potential issue.


It is difficult, then, to define what a valid transaction is. Clearly, my definition was insufficient.


Regards,

Damian Williamson


Damian Williamson via bitcoin-dev
2017-12-26 05:14:14 UTC
Permalink
I have needed to change tack somewhat; there is still much work to be
done.

This is a request for assistance and further discussion of the re-
revised proposal. I am sure there are still issues to be resolved.

## BIP Proposal: UTPFOTIB - Use Transaction Priority For Ordering Transactions In Blocks

Schema:  
##########  
Document: BIP Proposal  
Title: UTPFOTIB - Use Transaction Priority For Ordering Transactions In Blocks  
Date: 26-12-2017  
Author: Damian Williamson <***@live.com.au>  
Licence: Creative Commons Attribution-ShareAlike 4.0 International License.  
URL: http://thekingjameshrmh.tumblr.com/post/168948530950/bip-proposal-utpfotib-use-transaction-priority-for-order  
##########  

### 1. Abstract

This document proposes to address the issue of transactional
reliability in Bitcoin, where valid transactions may be stuck in the
transaction pool for extended periods or never confirm.

There are two key issues to be resolved to achieve this:

1.  The current transaction bandwidth limit.
2.  The current ad-hoc methods of including transactions in blocks, which
result in variable and confusing confirmation times for valid transactions,
including transactions with a valid fee that may never confirm.

It is important with any change to protect the value of fees, as these
will eventually be the only payment that miners receive. Rather than an
auction model for limited bandwidth, the proposal results in a
fee-for-priority-service auction model.

It would not be true to suggest that all feedback received so far has
been entirely positive, although most of it has been constructive.

The previous threads for this proposal are available here:  
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-December/subject.html

In all parts of this proposal, references to a transaction, a valid
transaction, a transaction with a valid fee, a valid fee, etc. refer to
any transaction that is otherwise valid with a fee of at least
0.00001000 BTC/kB, taken as the dust level as interpreted from the
Bitcoin Core GUI. Transactions with a fee rate lower than this are
considered dust.

In all parts of this proposal, dust and zero-fee transactions are
always ignored and/or excluded unless specifically mentioned.

It is generally assumed that miners currently prefer to include
transactions with higher fees.

### 2. The need for this proposal

We must all admit that transaction bandwidth is still lurking as a
serious issue for the operation, reliability, safety, consumer
acceptance and uptake of Bitcoin and, ultimately, for its value.

I recently sent a payment which was not urgent, so I chose the three-day
confirmation target from the fee recommendation. That transaction had
still not confirmed after more than six days - even waiting twice as
long seems quite reasonable to me (note for accuracy: it did eventually
confirm). That transaction is a valid transaction; it is not rubbish,
junk, or spam. Under the current model with transaction bandwidth
limitation, the longer a transaction waits, the less likely it is ever
to confirm, due to rising transaction numbers and being pushed back by
transactions with rising fees.

I argue that no transactions with fees above the dust level are rubbish
or junk; only some zero-fee transactions might be spam. Having an ever-
increasing number of valid transactions that do not confirm as more new
transactions with higher fees are created is the opposite of operating
a robust, reliable transaction system.

Business cannot operate with a model where transactions may or may not
confirm. Even a business choosing a modest fee has no guarantee that
their valid transaction will not be shuffled down by new transactions
to the realm of never confirming after it is created. Consumers also
will not accept this model as Bitcoin expands. If Bitcoin cannot be a
reliable payment system for confirmed transactions then consumers, by
and large, will simply not accept the model once they understand.
Bitcoin will be a dirty payment system, and this will kill the value of
Bitcoin.

Under the current system, a minority of transactions will eventually be
the lucky few who have fees high enough to escape being pushed down the
list.

Once there are consistently more than x transactions (the transaction
bandwidth limit) every ten minutes, even those choosing the twenty-minute
confirmation target (2 blocks) from the fee recommendations will
initially have at most a fifty percent chance of ever having their
payment confirm once 2x transactions is reached. Presently, not even
using fee recommendations can ensure a sufficiently high fee is paid to
ensure transaction confirmation.

I also argue that the current auction model for limited transaction
bandwidth is wrong, is not suitable for a reliable transaction system
and is wrong for Bitcoin. All transactions with valid fees must
confirm in due time. Currently, Bitcoin is not a safe way to send
payments.

I do not believe that consumers and businesses are against paying fees,
even high fees. What is required is operational reliability.

This great issue needs to be resolved for the safety and reliability of
Bitcoin. The time to resolve issues in commerce is before they become
great big issues. The time to resolve this issue is now. We must have
the foresight to identify and resolve problems before they trip us up.
Simply doubling block sizes every so often is reactionary and is not a
reliable permanent solution.

I have written this proposal for a technical solution but need your
help to write it up to an acceptable standard to be a full BIP.

### 3. The problem

Everybody wants value. Miners want to maximise revenue from fees (and,
we presume, to minimise block size). Consumers need transaction
reliability and (we presume) want low fees.

The current transaction bandwidth limit is a limiting factor for both.
As the operational safety of transactions is limited, so is consumer
confidence as they realise the issue and, accordingly, uptake is
limited. Fees are artificially inflated due to bandwidth limitations
while failing to provide a full confirmation service for all valid
transactions.

Current fee recommendations provide no satisfaction for transaction
reliability and, as Bitcoin scales, this will worsen.

Transactions are included in blocks by miners using whatever basis they
prefer. We expect that this is usually a fee-based priority. However,
even transactions with a valid fee may be left in the transaction pool
for some time. As transaction bandwidth becomes an issue, not even
extreme fees can ensure a transaction is processed in a timely manner
or at all.

Bitcoin must be a fully scalable and reliable service, providing full
transaction confirmation for every valid transaction.

The possibility of sending a transaction with a fee lower than one
acceptable for eventual confirmation should be removed from the
protocol and also from the user interface.

### 4. Solution summary

#### Main solution

Provide each valid transaction in the mempool with an individual
transaction priority each time before choosing transactions to include
in the current block. The priority is a function of the fee paid (on a
curve) and of the time waiting in the transaction pool (also on a
curve) out to n days (n = 60 days?), extending past n days. The value
for the fee on a curve may need an upper limit. The transaction
priority serves as the likelihood of a transaction being included in
the current block, and determines the order in which transactions are
tried to see whether they will be included.
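
As an illustration only (the exact curves are for a mathematician to choose; the shapes, scale and Python below are assumptions, not part of the specification), one possible pair of steep curves that reproduces the worked example further below, where a mid-range value maps to roughly five out of one-hundred:

```python
import math

# Illustrative curve shapes only. The exponent is chosen so that a
# mid-range input maps to about 5 on the 1..100 scale, matching the
# worked example in the explanation of priority below.
STEEPNESS = math.log(0.05) / math.log(0.5)   # ~4.32, so curve(0.5) ~= 5

def curve(x):
    """Map x in [0, 1] to a score on a steep 1..100 curve."""
    x = min(max(x, 0.0), 1.0)
    return max(1.0, 100.0 * x ** STEEPNESS)

def fee_score(fee_rate, fee_cap):
    # fee_cap is the suggested upper limit on the fee curve.
    return curve(min(fee_rate, fee_cap) / fee_cap)

def time_score(age_days, n_days=60):
    # Ages past n days simply remain at the top of the curve.
    return curve(min(age_days, n_days) / n_days)
```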

Nodes will need to keep track of when a transaction is first seen. It
is satisfactory for each node to do this independently provided the
information survives node restart. If there is a more reliable way to
determine when a transaction was first seen on the network then it
should be utilised.
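
A minimal sketch of such tracking (the file name and JSON layout are only assumptions), keeping a persistent first-seen time per txid so that waiting times survive a node restart:

```python
import json, os, time

FIRST_SEEN_FILE = "first_seen.json"   # assumed location and format

def load_first_seen():
    if os.path.exists(FIRST_SEEN_FILE):
        with open(FIRST_SEEN_FILE) as f:
            return json.load(f)
    return {}

def record_first_seen(first_seen, txid):
    # Only the first sighting is recorded; later sightings never reset it.
    first_seen.setdefault(txid, int(time.time()))

def save_first_seen(first_seen, departed_txids):
    # Drop entries for transactions that have left the pool, then persist.
    for txid in departed_txids:
        first_seen.pop(txid, None)
    with open(FIRST_SEEN_FILE, "w") as f:
        json.dump(first_seen, f)
```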

Use a dynamic target block size to make the current block. If the block
size is consistently too small then I expect ageing transactions will
be overrepresented as a portion of the block contents, to the point
where blocks will only contain the oldest transactions as they age past
n days. If the block size is too large on average then this will shrink
the transaction pool. Determine the target block size using: (current
average valid transaction pool size, taken from the pre-rollout
service) x ( 1 / (144 x n days) ) = number of transactions to be
included in the current block. The block created should be a minimum of
1MB in size regardless of whether the target block size is lower.
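
A worked example of the target-size calculation (the conversion from a transaction count to bytes via an average transaction size is an assumption added for illustration):

```python
BLOCKS_PER_DAY = 144

def target_tx_count(avg_pool_size, n_days=60):
    # current average valid transaction pool size x (1 / (144 x n days))
    return max(1, round(avg_pool_size * (1 / (BLOCKS_PER_DAY * n_days))))

def target_block_bytes(avg_pool_size, avg_tx_bytes, n_days=60,
                       min_block_bytes=1_000_000):
    # The block must be at least 1MB even if the target works out lower.
    return max(min_block_bytes,
               target_tx_count(avg_pool_size, n_days) * avg_tx_bytes)

# Example: a pool averaging 864,000 valid transactions with n = 60 gives
# 864,000 / (144 x 60) = 100 transactions as the target for this block.
```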

Nodes that have not yet adopted the proposal will just continue to
create 1MB unordered blocks.

The default value for mempoolexpiry may in future need to be adjusted
to match n days; or perhaps a smaller value, such as n = 14 days, may
be a more sensible approach?

All blocks created with a dynamic size should be verified to ensure
conformity to the probability distribution resulting from the priority
method. Since the input is a probability, the output should conform to
a probability distribution.
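
One rough way such a check might look (a screening test only, and an assumption of method rather than a specification: it treats each inclusion as an independent chance and ignores the target-size cutoff):

```python
import math

def conformance_z_scores(pool_priorities, included_txids, bands=10):
    """pool_priorities: {txid: p in [0, 1]}.  Returns one z-score per band."""
    grouped = [[] for _ in range(bands)]
    for txid, p in pool_priorities.items():
        band = min(int(p * bands), bands - 1)
        grouped[band].append((txid, p))
    scores = []
    for txs in grouped:
        if not txs:
            scores.append(None)
            continue
        expected = sum(p for _, p in txs)          # expected inclusions
        variance = sum(p * (1 - p) for _, p in txs)
        observed = sum(1 for txid, _ in txs if txid in included_txids)
        scores.append((observed - expected) / math.sqrt(variance)
                      if variance > 0 else 0.0)
    return scores  # a large |z| in any band suggests selection ignored priority
```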

The curves used for the priority of transactions would have to be
appropriate. Perhaps a mathematician with experience in probability can
develop the right formulae. My thinking is a steep curve. I suppose
that the probabilities across all transactions should account for a
sufficient number of inclusions that the target block size is met on
average, although it may not always be. As a suggestion, if every valid
transaction has been tried and the target block size is still not met,
consider padding with some dust or zero-fee transactions, highest BTC
transaction value first?

**Explanation of the operation of priority:**

> If transaction priority is, for example, a number between one (low)
and one-hundred (high), it can be directly understood as the percentage
chance in one-hundred of a transaction being included in the block.
Using probability or likelihood implies that there is some function of
random. Try the transactions in priority order from highest to lowest;
if random(100) < transaction priority then the transaction is included,
until the target block size is met.

> To break it down further, if both the fee on a curve value and the
time waiting on a curve value are each a number between one and one-
hundred, a rudimentary method may be to simply multiply those two
numbers, to find the priority number. For example, a middle fee
transaction waiting thirty days (if n = 60 days) may have a value of
five for each part  (yes, just five, the values are on a curve). When
multiplied that will give a priority value of twenty-five, or, a
twenty-five percent chance at that moment of being included in the
block; it will likely be included in one of the next four blocks,
getting more likely each chance. If it is still not included then the
value of time waiting will be higher, increasing the probability. A
very low fee transaction would have a fee value of one; it would not be
until near sixty days that such a low-fee transaction has a high
likelihood of being included in a block.

In practice it may be more useful to use numbers ranging from one-
hundred for the highest fee down to a small fraction of one for the
lowest fee on the fee curve and, on the time-waiting curve, from one
for a newly seen transaction up to a proportionately high number above
one-hundred. It is truly beyond my level of math to resolve probability
curves accurately without much trial and error.
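
A minimal sketch of the priority and selection rule described above, with illustrative placeholder curves (the steepness, fee cap, and data layout are assumptions; the proposal leaves the exact formulae open):

```python
import random

N_DAYS = 60        # horizon for the time-waiting curve
MAX_FEE = 0.001    # BTC/kB cap for the fee curve (placeholder value)
STEEPNESS = 4.32   # chosen so the midpoint of either curve is about five,
                   # matching the worked example above

def fee_value(fee_per_kb: float) -> float:
    """Steep curve mapping the fee rate onto roughly 1..100."""
    x = min(fee_per_kb, MAX_FEE) / MAX_FEE
    return max(1.0, 100.0 * x ** STEEPNESS)

def time_value(days_waiting: float) -> float:
    """Steep curve reaching 100 at n days and continuing above 100 past it."""
    return max(1.0, 100.0 * (days_waiting / N_DAYS) ** STEEPNESS)

def priority(tx: dict) -> float:
    """Rudimentary combination from the text: multiply the two curve values
    and read the result as a percentage chance (products above 100 simply
    mean certain inclusion)."""
    return fee_value(tx["fee"]) * time_value(tx["days"])

def select_for_block(mempool: list, target_count: int) -> list:
    """Try transactions in priority order from highest to lowest; include a
    transaction if random(100) < its priority, until the target is met."""
    block = []
    for tx in sorted(mempool, key=priority, reverse=True):
        if len(block) >= target_count:
            break
        if random.uniform(0.0, 100.0) < priority(tx):
            block.append(tx)
    return block

# The middle-fee, thirty-day example above: both curve values come out near
# five, so the priority is about twenty-five percent.
example = {"fee": MAX_FEE / 2, "days": 30}
print(round(fee_value(example["fee"])), round(time_value(example["days"])),
      round(priority(example)))  # -> 5 5 25
```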

The primary reason for addressing the issue is to ensure transactional
reliability and scalability while having each valid transaction confirm
in due time.

#### Pros

*   Maximizes transaction reliability.
*   Overcomes transaction bandwidth limit.
*   Fully scalable.
*   Maximizes possibility for consumer and business uptake.
*   Maximizes total fees paid per block without reducing reliability;
because of reliability, in time confidence and overall uptake are
greater; therefore, more transactions.
*   Market determines fee paid for transaction priority.
*   Fee recommendations work all the way out to 30 days or greater.
*   Provides additional block entropy; greater security since there is
less probability of predicting the next block. _Although this is not
necessary it is a product of the operation of this proposal._

#### Cons

*   Could initially lower total transaction fees per block.
*   Must first be programmed.

#### Pre-rollout

Nodes need to have at a minimum a loose understanding of the average
(since there is no consensus) size of the transaction pool as a
requirement to enable future changes to the way blocks are constructed.

A new network service should be constructed to meet this need. This
service makes no changes to any existing operation or function of the
node. Initially, Bitcoin Core is a suitable candidate.

**The service must:**

*   Have an individual temporary (runtime permanent only) Serial Node
ID.
*   Accept communication of the number of valid transactions in the
mempool of another valid Bitcoin node along with the Serial Node ID of
the node whose value is provided.
*   Disconnect the service from any non-Bitcoin node. Bitcoin Core may
handle this already?
*   Expire any value not updated for k minutes (k = 30 minutes?).
*   Broadcast all mempool information the node has every m minutes (m =
10 minutes?), including its own.
*   The node's own mempool information should not be broadcast or used
in calculation until the node has been up long enough for the mempool
to normalise, for at least o minutes (o = 300 minutes?).
*   Only new or updated mempool values should be transmitted to the
same node; updated includes values refreshed with no change.
*   All known mempool information must survive node restart.
*   If the node's own mempool is not normalised and network information
is not available to calculate an average, just display zero.
*   Internally, the average transaction pool size must return the
calculated average if an average is available or, if none is available,
just the number of valid transactions in the node's own mempool,
regardless of whether it is normalised.

Bitcoin Core must use all collated information on mempool size to
calculate a figure for the average mempool size.

The calculated figure should be displayed in the appropriate place in
the Debug window alongside the text Network average transactions.

Consideration must be given, before development, to the network
bandwidth this would require. All programming must be consistent with
the current
operation and conventions of Bitcoin Core. Methods must work on all
platforms.

As this new service does not affect any existing service or feature of
Bitcoin or Bitcoin Core, this can technically be programmed now and
included in Bitcoin Core at any time.

### 5. Solution operation

This is a simplistic view of the operation. The actual operation will
need to be determined accurately in a spec for the programmer.

1.  Determine the target block size for the current block.
2.  Assign a transaction priority to each valid transaction in the
mempool.
3.  Select transactions to include in the current block using
probability in transaction priority order until the target block size
is met. If the target block size is not met, include dust and zero-fee
transactions to pad (see the sketch after this list).
4.  Solve block.
5.  Broadcast the current block when it is solved.
6.  Block is received.
7.  Block verification process.
8.  Accept/reject block based on verification result.
9.  Repeat.
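
A compressed sketch of steps 1 to 3, under assumed data structures (each mempool entry as a dict with `dust` and `value` fields, and `priority_fn` standing in for whatever priority function is finally chosen; none of this is an existing interface):

```python
import random

def build_block_template(mempool, average_mempool_size, priority_fn, n_days=60):
    """Steps 1-3: dynamic target size, probabilistic selection in priority
    order, then padding with dust and zero-fee transactions if short."""
    # 1. Dynamic target: average pool size x (1 / (144 blocks per day x n days)).
    target = max(1, round(average_mempool_size / (144 * n_days)))

    # 2-3. Assign priorities and select probabilistically, highest first.
    block = []
    for tx in sorted((t for t in mempool if not t["dust"]),
                     key=priority_fn, reverse=True):
        if len(block) >= target:
            break
        if random.uniform(0.0, 100.0) < priority_fn(tx):
            block.append(tx)

    # 3. If the target was not met, pad with dust and zero-fee transactions,
    # highest BTC value first, as suggested in the solution summary.
    if len(block) < target:
        padding = sorted((t for t in mempool if t["dust"]),
                         key=lambda t: t["value"], reverse=True)
        block.extend(padding[:target - len(block)])
    return block
```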

### 6. Closing comments

It may be possible to verify blocks conform to the proposal by showing
that the probability for all transactions included in the block
statistically conforms to a probability distribution curve, *if* the
individual transaction priority can be recreated. I am not that deep
into the mathematics; however, it may also be possible to use a similar
method based on the fee alone: that, statistically, the block conforms
to a fee distribution. Any dust and zero-fee transactions
would have to be ignored. This solution needs a competent mathematician
with experience in probability and statistical distribution.

There has been some concern expressed over spam and very low fee
transactions, and an infinite block size resulting. I hope that for
those concerned using the dust level addresses the issue, especially as
the value of Bitcoin grows.

This proposal is necessary. I implore, at the very least, that we use
some method that validates full transaction reliability and enables
scalability of Bitcoin. If not this proposal, an alternative.

I have done as much with this proposal as I feel that I am able so far
but continue to take your feedback.

Regards,  
Damian Williamson

[![Creative Commons License](https://i.creativecommons.org/l/by-sa/4.0/
88x31.png)](http://creativecommons.org/licenses/by-sa/4.0/)  
<span xmlns:dct="http://purl.org/dc/terms/"
href="http://purl.org/dc/dcmitype/Text" property="dct:title"
rel="dct:type">BIP Proposal: UTPFOTIB - Use Transaction Priority For
Ordering Transactions In Blocks</span> by [Damian Williamson
&lt;***@live.com.au&gt;](http://thekingjameshrmh.tumblr.com/post/1
68948530950/bip-proposal-utpfotib-use-transaction-priority-for-order)
is licensed under a [Creative Commons Attribution-ShareAlike 4.0
International License](http://creativecommons.org/licenses/by-sa/4.0/).
Based on a work at [https://lists.linuxfoundation.org/pipermail/bitcoin
-dev/2017-
December/015371.html](https://lists.linuxfoundation.org/pipermail/bitco
in-dev/2017-December/015371.html).
Permissions beyond the scope of this license may be available at [https
://opensource.org/licenses/BSD-3-
Clause](https://opensource.org/licenses/BSD-3-Clause).
ZmnSCPxj via bitcoin-dev
2017-12-27 03:55:43 UTC
Permalink
Good morning Damian,

I see you have modified your proposal to be purely driven by miners, with fullnodes not actually being able to create a strict "yes-or-no" answer as to block validity under your rules. This implies that your rules cannot be enforced and that rational miners will ignore your proposal unless it brings in more money for them. The fact that your proposal provides some mechanism to increase block size means that miners will be incentivized to falsify data (by making up their own transactions just above your fixed "dust size" threshold, whatever that threshold may be -- and remember, miners get at least 12.5 BTC per block, so they can make a lot of little falsified transactions to justify every block size increase) until the block size increase per block is the maximum possible block size increase.

--

Let me then explain proof-of-work and the arrow of time in Physics. It may seem a digression, but please, bear with me.

Proof-of-work proves that work was performed, and (crucially) that this work was done in the past.

This is important because of the arrow of time.

In principle, every physical interaction is reversible. Visualize a video of two indivisible particles. The two particles move towards each other, collide, and because of the collision, fly apart. If you ran this video in reverse, or in forward, it would not be distinguishable to you, as an outside observer, whether the video was running in reverse or not. It seems at some level, time does not exist.

And yet time exists.

Consider another video, that of a vase being dropped on a hard surface. The vase hits the surface and shatters. Played in reverse, we can judge it as nonsensical: scattered pieces of ceramic spontaneously forming a vase and then flying upwards. This orients our arrow of time: the arrow of time points from states of the universe where lesser entropy exists (the vase is whole) to where greater entropy exists (the vase is in many pieces).

Indeed, all measures of time are, directly or indirectly, measures of increases in entropy. Consider a simple hourglass: you place it into a state of low entropy and high energy, with most of the sand in the upper part of the hourglass. As sand falls, and more of that energy is lost into entropy, you judge that time passes.

Consider a proof-of-work algorithm: you place electrons into a state of low entropy and high energy. As electrons go through the mining hardware, producing hashes that pass the difficulty requirement, the energy in those electrons is lost into entropy (heat), and from the hashes produced (which proves not only that work was done, but in particular, that entropy increased due to work being done), you judge that time passes.

--

Thus, the blockchain itself is already a service that provides a measure of time. When a block commits to a transaction, then that transaction is known to have existed at that block height, at the latest.

Thus one idea, is to have each block commit to some view of the mempool. If a transaction exists in this mempool-view, then you know that the transaction is at least that old, and can judge the age from this and use this to compute the "transaction priority".

Unfortunately, transferring the data to prove that the mempool-view is valid, is equivalent to always sweeping the entire mempool contents per block. In that case you might as well not have a block size limit.

In addition, miners may still commit to a falsely-empty mempool and deny that your transaction is old and therefore has priority, and will simply fill their blocks with transactions that have high feerates rather than high priority. Thus feerate will still be the ultimate measure.

Rather than attempt this, perhaps developers should be encouraged to make use of existing mechanisms, RBF and CPFP, to allow transactions to be sped up by directly manipulating feerates, as priority (by your measure) is not practically computable.

Regards,
ZmnSCPxj
Damian Williamson via bitcoin-dev
2017-12-27 12:29:41 UTC
Permalink
Good evening ZmnSCPxj,


Thank you for your considered discussion.


Am I wrong to think that any fullnode can validate that blocks conform to a probability distribution? In my understanding, after adoption of the proposal any full node could validate all the properties of a block that they now validate, apart from block size, and additionally that the block conforms to a probability distribution. It seems a yes-or-no result. Let us assume that such a probability distribution exists, since the input is a probability.

Before or after the proposal, miners could falsify transactions if there is a feasible way for them to do this. The introduction of the proposal does not change that fact. At the moment the incentive to falsify transactions is to fill blocks so that real transactions must pay the highest possible fees in the auction for limited transaction bandwidth, resulting in a net gain for miners. Simply making bigger blocks serves no economic purpose in itself: since the miners, we presume, must pay the fees for their own falsified transactions, there is no net gain; the fee will be distributed through the pool. Unless, by miners, we mostly mean mining pools and collusion. Still, where is the gain? It is only the blocks that will be larger, with no economic advantage.

In a fee for priority service auction, there is always limited space in each new block since it represents only a small fraction of the size of the mempool. Presenting fraudulent transactions at the bottom end of the scale has limited effect on the cost of being near the front of the queue, at priority. As the fraudulent transactions age they would be included in blocks presuming the fee is above dust level, but the block size would grow to accommodate them since the valid mempool is larger. The auction for priority still continues uninterrupted at the top of the priority curve. There is nothing stopping a motivated individual now from writing a script to create a million pointless dust transactions per day, flooding the mempool. Even if the fee is above dust level the proposal does not change this but, ensures transactional reliability for valid transactions.

In an idealist world, all nodes could agree on the state of the mempool. I agree, there is no feasible way currently to hold the mempool to consensus without a network of dedicated mempool servers. As it is, it has been suggested that all long-running nodes will have approximately a similar view of the mempool. Sweeping the entire mempool contents per block would achieve what is required if there was a mempool consensus but since it will just be one node's view of the mempool that will not be the result.

My speculation is that as a result of the proposal, through increased adoption of Bitcoin over time there would, in fact, be more transactions and greater net fees paid per day. An increased value of BTC that we suppose would follow from increased usage would augment this fee value increase. It surely follows that a more stable and reliable service will have greater consumer and business acceptance, and there it follows that this is in miners financial interest.

I have not considered a maxblocksize since I consider that the mempool can eventually grow infinitely in size just in valid transactions, without even any fraudulent transactions. I suppose that in time it will become necessary to start all new nodes in pruned mode by default due to the onerous storage requirements of the full blockchain. I do not think that the proposed changes alter this.

I am sure that there is much more to write.

Regards,
Damian Williamson



Damian Williamson via bitcoin-dev
2018-01-01 11:04:57 UTC
Permalink
Happy New Year all.

This proposal has been further amended with several minor changes and a
few additions.

I believe that all known issues raised so far have been sufficiently
addressed. Either that or I still have more work to do.

## BIP Proposal: Revised: UTPFOTIB - Use Transaction Priority For
Ordering Transactions In Blocks

Schema:  
##########  
Document: BIP Proposal  
Title: UTPFOTIB - Use Transaction Priority For Ordering Transactions In
Blocks  
Published: 26-12-2017  
Revised: 01-01-2018  
Author: Damian Williamson <***@live.com.au>  
Licence: Creative Commons Attribution-ShareAlike 4.0 International
License.  
URL: http://thekingjameshrmh.tumblr.com/post/168948530950/bip-proposal-
utpfotib-use-transaction-priority-for-order  
##########

### 1. Abstract

This document proposes to address the issue of transactional
reliability in Bitcoin, where valid transactions may be stuck in the
transaction pool for extended periods or never confirm.

There are two key issues to be resolved to achieve this:

1.  The current transaction bandwidth limit.
2.  The current ad-hoc methods of including transactions in blocks
resulting in variable and confusing confirmation times for valid
transactions, including transactions with a valid fee that may never
confirm.

It is important with any change to protect the value of fees as these
will eventually be the only payment that miners receive. Rather than an
auction model for limited bandwidth, the proposal results in a fee for
priority service auction model.

It would not be true to suggest that all feedback received so far has
been entirely positive, although most of it has been constructive.

The previous threads for this proposal are available here:  
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-December/s
ubject.html

In all parts of this proposal, references to a transaction, a valid
transaction, a transaction with a valid fee, a valid fee, etc. mean any
transaction that is otherwise valid with a fee of at least 0.00001000
BTC/KB, the dust level as interpreted from the Bitcoin Core GUI.
Transactions with a fee lower than this rate are considered dust.

In all parts of this proposal, dust and zero-fee transactions are
always ignored and/or excluded unless specifically mentioned.
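
For concreteness, a minimal sketch of the fee classification implied by the definition above (the function and constant names are illustrative only, not an existing interface):

```python
DUST_LEVEL_BTC_PER_KB = 0.00001000  # as interpreted from the Bitcoin Core GUI

def is_valid_fee(fee_btc: float, size_kb: float) -> bool:
    """A transaction is 'valid' for this proposal if its fee rate is at least
    the dust level; anything below that is treated as dust."""
    return (fee_btc / size_kb) >= DUST_LEVEL_BTC_PER_KB

print(is_valid_fee(0.00002, 1.5))  # about 0.0000133 BTC/KB -> True
print(is_valid_fee(0.00001, 2.0))  # 0.0000050 BTC/KB -> False (dust)
```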

It is generally assumed that miners currently prefer to include
transactions with higher fees.

### 2. The need for this proposal

We all must learn to admit that transaction bandwidth is still lurking
as a serious issue for the operation, reliability, safety, consumer
acceptance, uptake and, for the value of Bitcoin.

I recently sent a payment which was not urgent, so I chose three-day
target confirmation from the fee recommendation. That transaction has
still not confirmed after now more than six days - even waiting twice
as long seems quite reasonable to me (note for accuracy: it did
eventually confirm). That transaction is a valid transaction; it is not
rubbish, junk or, spam. Under the current model with transaction
bandwidth limitation, the longer a transaction waits, the less likely
it is ever to confirm due to rising transaction numbers and being
pushed back by transactions with rising fees.

I argue that no transactions with fees above the dust level are rubbish
or junk, only some zero fee transactions might be spam. Having an ever-
increasing number of valid transactions that do not confirm as more new
transactions with higher fees are created is the opposite of operating
a robust, reliable transaction system.

While the miners have discovered a gold mine, it is the service they
provide that is valuable. If the service is unreliable they are not
worth the gold that they mine. This is reflected in the value of
Bitcoin.

Business cannot operate with a model where transactions may or may not
confirm. Even a business choosing a modest fee has no guarantee that
their valid transaction will not be shuffled down by new transactions
to the realm of never confirming after it is created. Consumers also
will not accept this model as Bitcoin expands. If Bitcoin cannot be a
reliable payment system for confirmed transactions then consumers, by
and large, will simply not accept the model once they understand.
Bitcoin will be a dirty payment system, and this will kill the value of
Bitcoin.

Under the current system, a minority of transactions will eventually be
the lucky few who have fees high enough to escape being pushed down the
list.

Once there are more than x transactions (transaction bandwidth limit)
every ten minutes, only those choosing twenty-minute confirmation (2
blocks) from the fee recommendations will have initially at most a
fifty percent chance of ever having their payment confirm by the time
2x transactions is reached. Presently, not even using fee
recommendations can ensure a sufficiently high fee is paid to ensure
transaction confirmation.

I also argue that the current auction model for limited transaction
bandwidth is wrong, is not suitable for a reliable transaction system
and, is wrong for Bitcoin. All transactions with valid fees must
confirm in due time. Currently, Bitcoin is not a safe way to send
payments.

I do not believe that consumers and business are against paying fees,
even high fees. What is required is operational reliability.

This great issue needs to be resolved for the safety and reliability of
Bitcoin. The time to resolve issues in commerce is before they become
great big issues. The time to resolve this issue is now. We must have
the foresight to identify and resolve problems before they trip us
over.  Simply doubling block sizes every so often is reactionary and is
not a reliable permanent solution.

I have written this proposal for a technical solution but, need your
help to write it up to an acceptable standard to be a full BIP.

### 3. The problem

Everybody wants value. Miners want to maximise revenue from fees (and
we presume, to minimise block size). Consumers need transaction
reliability and, (we presume) want low fees.

The current transaction bandwidth limit is a limiting factor for both.
As the operational safety of transactions is limited, so is consumer
confidence as they realise the issue and, accordingly, uptake is
limited. Fees are artificially inflated due to bandwidth limitations
while failing to provide a full confirmation service for all valid
transactions.

Current fee recommendations provide no satisfaction for transaction
reliability and, as Bitcoin scales, this will worsen.

Transactions are included in blocks by miners using whatever basis they
prefer. We expect that this is usually a fee-based priority. However,
even transactions with a valid fee may be left in the transaction pool
for some time. As transaction bandwidth becomes an issue, not even
extreme fees can ensure a transaction is processed in a timely manner
or at all.

Bitcoin must be a fully scalable and reliable service, providing full
transaction confirmation for every valid transaction.

The possibility of sending a transaction with a fee too low to allow
eventual confirmation should be removed from both the protocol and the
user interface.

Bitcoin should be capable of reliably and inexpensively processing
casual transactions, and also of processing priority transactions,
which pay fees at auction, in the shortest possible timeframe.

### 4. Solution summary

#### Main solution

Assign each valid transaction in the mempool an individual transaction
priority each time before choosing transactions to include in the
current block. The priority is a function of the fee (on a curve) and
of the time waiting in the transaction pool (also on a curve) out to n
days (n = 60 days?), and extending past n days. The value for fee on a
curve may need an upper limit. The transaction priority serves as the
likelihood of a transaction being included in the current block, and
determines the order in which transactions are tried for inclusion.

Nodes will need to keep track of when a transaction is first seen. It
is satisfactory for each node to do this independently provided the
full mempool and information survives node restart. If there is a more
reliable way to determine when a transaction was first seen on the
network then it should be utilised.

> My current default installation of Bitcoin Core v0.15.1 does not
seem to save and load the mempool on restart, despite the notes in the
command line options panel that the default for persistmempool is 1. In
the debug panel there were some 90,000 transactions before restart and
only some 200-odd shortly after. Manually setting persistmempool=1 in
the conf file does not seem to make any difference. Perhaps it is
operating as expected and I am not sure what to observe, but it does
not appear to be observably saving and loading the mempool on restart.
This will need to be resolved.

Use a dynamic target block size to make the current block. This marks a
shift from using block size or weight to a count of transactions.
Determine the target block size as: pre-rollout (current average valid
transaction pool size) x ( 1 / (144 x n days) ) = number of
transactions to be included in the current block. The block created
should be a minimum of 1MB in size even if the target block size is
lower.

If created blocks consistently contain too few transactions, and the
number of new transactions created is continuously greater than the
block size will accommodate, then I expect ageing transactions will
eventually be over-represented as a portion of the block contents. Once
another node conforming to the proposal makes a block, the block size
will be proportionately larger as the transaction pool has grown. If
the block size is too large on average then this will shrink the
transaction pool.

Miners will likely want to conform to the proposal, since making blocks
larger than necessary leaves more room in each block, potentially
lowering the highest fees paid for priority service. Always making
blocks smaller than the proposal requires will in time lower the
utility value of Bitcoin, a different situation but akin to the current
one. Transactions will still always confirm, but with longer and longer
wait periods. The auction at the front of the queue for priority will
be destroyed, as there will eventually be no room in blocks besides
ageing transactions and little value in paying more than the minimum
fee. Obviously, neither of these scenarios is in a miner's interest.

Without a consensus as to what size dynamic block to create,
enforcement of dynamic block size is not currently possible. It may be
possible for a consensus to be formed in the future but here I cannot
speculate. I can only suggest that it is in the interest of Bitcoin as
a whole and, in the interest of each node to conform to the proposal.
Some nodes failing to conform to the proposed requirements of dynamic
size or transaction priority in this proposal will not be destructive
to the operation of the proposal.

If necessary, nodes that have not yet adopted the proposal will just
continue to create standard fixed size unordered blocks, although, if
the current mechanisms of block validation include the fixed block size
then it is unlikely that these nodes will be able to validate the
blockchain going forward. In this case a hard fork and a full transfer
to the new method should be required. If dynamic blocks with ordered
transactions will be valid to existing nodes then only a soft fork is
required. There is no proposed change to the internal construction of
blocks, only to the block size and using an ordered method of
transaction selection.

> The default value for mempoolexpiry in Bitcoin Core may in future
need to be adjusted to something greater than n days; alternatively,
using a smaller value such as n = 14 days may be a more sensible
approach.

All blocks created with dynamic size should be verified to ensure
conformity to a probability distribution curve resulting from the
priority method. Since the input is a probability, the output should
conform to a probability distribution.
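
One possible shape for such a check, sketched under the assumption that a verifier can recompute each mempool transaction's priority at the time the block was built (section 6 notes that recreating individual priorities is itself an open question). The statistic used here, the block's summed priority compared against Monte Carlo simulations of the selection rule, and the thresholds are illustrative only, not a full conformity test:

```python
import random

def plausible_block(mempool_priorities, block_priorities, target_count,
                    trials=2000, tail=0.005):
    """Simulate the priority-ordered selection rule `trials` times over the
    recomputed mempool priorities, then test whether the received block's
    summed priority falls within the central region of the simulated
    distribution. Returns True if the block looks statistically plausible."""
    def simulate():
        total, count = 0.0, 0
        for p in sorted(mempool_priorities, reverse=True):
            if count >= target_count:
                break
            if random.uniform(0.0, 100.0) < p:
                total += p
                count += 1
        return total

    simulated = sorted(simulate() for _ in range(trials))
    observed = sum(block_priorities)
    low = simulated[int(tail * trials)]
    high = simulated[int((1.0 - tail) * trials) - 1]
    return low <= observed <= high
```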

The curves used for the priority of transactions would have to be
appropriate. Perhaps a mathematician with experience in probability can
develop the right formulae. My thinking is a steep curve. The combined
probabilities across all transactions should account for enough
inclusions that the target block size is met on average, although it
may not always be. As a suggestion, if every valid transaction has been
tried and the target block size is not yet met, consider padding with
some dust or zero-fee transactions, highest BTC transaction value
first.

**Explanation of the operation of priority:**

> If transaction priority is, for example, a number between one (low)
and one-hundred (high), it can be directly understood as the percentage
chance in one-hundred of a transaction being included in the block.
Using probability or likelihood implies that some function of random is
involved. Try the transactions in priority order from highest to
lowest; if random(100) < transaction priority then the transaction is
included, continuing until the target block size is met.

> To break it down further, if both the fee on a curve value and the
time waiting on a curve value are each a number between one and one-
hundred, a rudimentary method may be to simply multiply those two
numbers, to find the priority number. For example, a middle fee
transaction waiting thirty days (if n = 60 days) may have a value of
five for each part  (yes, just five, the values are on a curve). When
multiplied that will give a priority value of twenty-five, or, a
twenty-five percent chance at that moment of being included in the
block; it will likely be included in one of the next four blocks,
getting more likely each chance. If it is still not included then the
value of time waiting will be higher, increasing the probability. A
very low fee transaction would have a fee value of one; it would not be
until near sixty days that such a low-fee transaction has a high
likelihood of being included in a block.

In practice it may be more useful to use numbers ranging from one-
hundred for the highest fee down to a small fraction of one for the
lowest fee on the fee curve and, on the time-waiting curve, from one
for a newly seen transaction up to a proportionately high number above
one-hundred. It is truly beyond my level of math to resolve probability
curves accurately without much trial and error.

The primary reason for addressing the issue is to ensure transactional
reliability and scalability while having each valid transaction confirm
in due time.

#### Pros

*   Maximizes transaction reliability.
*   Overcomes transaction bandwidth limit.
*   Fully scalable.
*   Maximizes possibility for consumer and business uptake.
*   Maximizes total fees paid per block without reducing reliability;
because of reliability, in time confidence and overall uptake are
greater; therefore, more transactions.
*   Market determines fee paid for transaction priority.
*   Fee recommendations work all the way out to 30 days or greater.
*   Provides additional block entropy; greater security since there is
less probability of predicting the next block. _Although this is not
necessary it is a product of the operation of this proposal._

#### Cons

*   Could initially lower total transaction fees per block.
*   Must first be programmed.

#### Pre-rollout

Nodes need to have at a minimum a loose understanding of the average
(since there is no consensus) size of the transaction pool as a
requirement to enable future changes to the way blocks are constructed.

A new network service should be constructed to meet this need. This
service makes no changes to any existing operation or function of the
node. Initially, Bitcoin Core is a suitable candidate.

For all operations we count only valid transactions.

**The service must:**

*   Have an individual temporary (runtime permanent only) Serial Node
ID.
*   Accept communication of the number of valid transactions in the
mempool of another valid Bitcoin node along with the Serial Node ID of
the node whose value is provided.
*   Disconnect the service from any non-Bitcoin node. Bitcoin Core may
handle this already?
*   Expire any value not updated for k minutes (k = 30 minutes?).
*   Broadcast all mempool information the node has every m minutes (m =
10 minutes?), including its own.
*   The node's own mempool information should not be broadcast or used
in calculation until the node has been up long enough for the mempool
to normalise, for at least o minutes (o = 300 minutes?).
*   Alternatively, if the node's own full mempool is loaded from disk
on node restart, a shorter period may suffice (o = 30 minutes?).
*   Only new or updated mempool values should be transmitted to the
same node; updated includes values refreshed with no change.
*   All known mempool information must survive node restart.
*   If the node's own mempool is not normalised and network information
is not available to calculate an average, just display zero.
*   Internally, the average transaction pool size must return the
calculated average if an average is available or, if none is available,
just the number of valid transactions in the node's own mempool,
regardless of whether it is normalised.

Bitcoin Core must use all collated information on mempool size to
calculate a figure for the average mempool size.
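
A minimal sketch of the bookkeeping the requirements above describe, with the class name, timings and structure as assumptions rather than an existing Bitcoin Core interface:

```python
import time

EXPIRY_SECONDS = 30 * 60      # k = 30 minutes: drop values not updated in time
NORMALISE_SECONDS = 300 * 60  # o = 300 minutes before trusting our own mempool

class MempoolSizeView:
    """Tracks mempool sizes reported by serial node ID and yields the average
    used for the dynamic block size calculation."""

    def __init__(self, own_node_id):
        self.own_node_id = own_node_id
        self.started_at = time.time()
        self.reports = {}  # serial node ID -> (valid tx count, last updated)

    def record(self, node_id, tx_count):
        self.reports[node_id] = (tx_count, time.time())

    def _fresh_counts(self):
        now = time.time()
        return [count for count, seen in self.reports.values()
                if now - seen <= EXPIRY_SECONDS]

    def average(self, own_mempool_count):
        """Network average if reports are available; otherwise fall back to
        the node's own count, normalised or not."""
        counts = self._fresh_counts()
        if time.time() - self.started_at >= NORMALISE_SECONDS:
            counts.append(own_mempool_count)
        if counts:
            return sum(counts) / len(counts)
        return own_mempool_count
```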

The calculated figure should be displayed in the appropriate place in
the Debug window alongside the text Network average transactions.

Consideration must be given, before development, to the network
bandwidth this would require. All programming must be consistent with
the current
operation and conventions of Bitcoin Core. Methods must work on all
platforms.

As this new service does not affect any existing service or feature of
Bitcoin or Bitcoin Core, this can technically be programmed now and
included in Bitcoin Core at any time.

### 5. Solution operation

This is a simplistic view of the operation. The actual operation will
need to be determined accurately in a spec for the programmer.

1.  Determine the target block size for the current block.
2.  Assign a transaction priority to each valid transaction in the
mempool.
3.  Select transactions to include in the current block using
probability in transaction priority order until the target block size
is met. If target block size is not met, include dust and zero-fee
transactions to pad.
4.  Solve block.
5.  Broadcast the current block when it is solved.
6.  Block is received.
7.  Block verification process.
8.  Accept/reject block based on verification result.
9.  Repeat.

### 6. Closing comments

It may be possible to verify blocks conform to the proposal by showing
that the probability for all transactions included in the block
statistically conforms to a probability distribution curve, *if* the
individual transaction priority can be recreated. I am not that deep
into the mathematics; however, it may also be possible to use a similar
method based on the fee alone: that, statistically, the block conforms
to a fee distribution. Any dust and zero-fee transactions
would have to be ignored. This solution needs a competent mathematician
with experience in probability and statistical distribution.

It would be a trivial addition to this proposal for a node to provide
the next block size along with a block when it is solved. I am not sure
that this creates any actual benefit, since the provided next block
size is only one node's view; as it is, the node creating the next
block may just as well use its own view. Providing a next block size
adds complexity to the required operation; however, perhaps what it
accomplishes is not trivial, and the feature could be included in the
operation.

Instead of the pre-rollout network service providing data as to valid
transactions in mempool, it could directly provide data as to the
suggested next block size if that is preferred, using a similar
operation as is suggested now and averaging all received suggested next
block sizes.

It may be foreseeable in the future for Bitcoin to operate with a
network of dedicated full blockchain & mempool servers. This would not
be without challenges to overcome but would offer several benefits,
including to the operation of this proposal, and especially as the RAM
and storage requirements of a full node grow. It is easy to foresee
that in just another seven years of operation a Bitcoin Full Node will
require at least 300GB of storage and, if the mempool only doubles in
size, over 1GB of RAM.

There has been some concern expressed over spam and very low fee
transactions, and an infinite block size resulting. I hope that for
those concerned using the dust level addresses the issue, especially as
the value of Bitcoin grows.

Notwithstanding this proposal, all blocks including those with dynamic
size each have limited transaction space per block. This proposal
results in a fee for priority service auction, where the probability of
a transaction to be included in limited space in the next available
block is auctioned to the highest bidders and all other transactions
must wait until they reach priority by ageing to gain significant
probability. Under this proposal the mempool can grow quite large while
the confirmation service continues in a stable and reliable manner.
Several incentives for attackers are removed, as there are no longer
multiple potential incentives for unnecessarily filling blocks or
flooding the mempool with transactions, whether such transactions are
fraudulent, valid or, otherwise. Adoption of this proposal and
adherence results in a reliable, stable fee paying transaction
confirmation service and a beneficial auction.

This proposal is necessary. I implore, at the very least, that we use
some method that validates full transaction reliability and enables
scalability of Bitcoin. If not this proposal, an alternative.

I have done as much with this proposal as I feel that I am able so far
but continue to take your feedback.

Regards,  
Damian Williamson

[![Creative Commons License](https://i.creativecommons.org/l/by-sa/4.0/
88x31.png)](http://creativecommons.org/licenses/by-sa/4.0/)  
<span xmlns:dct="http://purl.org/dc/terms/"
href="http://purl.org/dc/dcmitype/Text" property="dct:title"
rel="dct:type">BIP Proposal: UTPFOTIB - Use Transaction Priority For
Ordering Transactions In Blocks</span> by [Damian Williamson
&lt;***@live.com.au&gt;](http://thekingjameshrmh.tumblr.com/post/1
68948530950/bip-proposal-utpfotib-use-transaction-priority-for-order)
is licensed under a [Creative Commons Attribution-ShareAlike 4.0
International License](http://creativecommons.org/licenses/by-sa/4.0/).
Based on a work at [https://lists.linuxfoundation.org/pipermail/bitcoin-
dev/2017-
December/015371.html](https://lists.linuxfoundation.org/pipermail/bitco
in-dev/2017-December/015371.html).
Permissions beyond the scope of this license may be available at [https
://opensource.org/licenses/BSD-3-
Clause](https://opensource.org/licenses/BSD-3-Clause).
Damian Williamson via bitcoin-dev
2018-01-04 09:01:10 UTC
Permalink
This proposal has a new update, mostly minor edits. Additionally, I had a logic flaw in the hard fork / soft fork declaration statement. The specific terms of the CC-BY-SA-4.0 licence the document is published under have now been updated to include additional permissions available under the MIT licence.


Recently, on Twitter:

I am looking for a capable analyst/programmer to work on a BIP proposal as co-author. Will need to format several Full BIP's per these BIP process requirements: ( https://github.com/bitcoin/bips/blob/master/bip-0002.mediawiki ) from a BIP Proposal, being two initially for non-consensus full-interoperable pre-rollout on peer service layer & API/RPC layer and, a reference implementation for Bitcoin Core per: ( https://github.com/bitcoin/bitcoin/blob/master/CONTRIBUTING.md ). Interested parties please reply via this list thread: ( https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-December/015485.html ) #Bitcoin #BIP


Regards,

Damian Williamson


________________________________
From: bitcoin-dev-***@lists.linuxfoundation.org <bitcoin-dev-***@lists.linuxfoundation.org> on behalf of Damian Williamson via bitcoin-dev <bitcoin-***@lists.linuxfoundation.org>
Sent: Monday, 1 January 2018 10:04 PM
To: bitcoin-***@lists.linuxfoundation.org
Subject: [bitcoin-dev] BIP Proposal: Revised: UTPFOTIB - Use Transaction Priority For Ordering Transactions In Blocks

Happy New Year all.

This proposal has been further amended with several minor changes and a
few additions.

I believe that all known issues raised so far have been sufficiently
addressed. Either that or, I still have more work to do.

## BIP Proposal: Revised: UTPFOTIB - Use Transaction Priority For
Ordering Transactions In Blocks

Schema:
##########
Document: BIP Proposal
Title: UTPFOTIB - Use Transaction Priority For Ordering Transactions In
Blocks
Published: 26-12-2017
Revised: 01-01-2018
Author: Damian Williamson <***@live.com.au>
Licence: Creative Commons Attribution-ShareAlike 4.0 International
License.
URL: http://thekingjameshrmh.tumblr.com/post/168948530950/bip-proposal-
utpfotib-use-transaction-priority-for-order
##########

### 1. Abstract

This document proposes to address the issue of transactional
reliability in Bitcoin, where valid transactions may be stuck in the
transaction pool for extended periods or never confirm.

There are two key issues to be resolved to achieve this:

1. The current transaction bandwidth limit.
2. The current ad-hoc methods of including transactions in blocks
resulting in variable and confusing confirmation times for valid
transactions, including transactions with a valid fee that may never
confirm.

It is important with any change to protect the value of fees as these
will eventually be the only payment that miners receive. Rather than an
auction model for limited bandwidth, the proposal results in a fee for
priority service auction model.

It would not be true to suggest that all feedback received so far has
been entirely positive although, most of it has been constructive.

The previous threads for this proposal are available here:
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-December/s
ubject.html

In all parts of this proposal, references to a transaction, a valid
transaction, a transaction with a valid fee, a valid fee, etc. is
defined as any transaction that is otherwise valid with a fee of at
least 0.00001000 BTC/KB as defined as the dust level, interpreting from
Bitcoin Core GUI. Transactions with a fee lower than this rate are
considered dust.

In all parts of this proposal, dust and zero-fee transactions are
always ignored and/or excluded unless specifically mentioned.

It is generally assumed that miners currently prefer to include
transactions with higher fees.

### 2. The need for this proposal

We all must learn to admit that transaction bandwidth is still lurking
as a serious issue for the operation, reliability, safety, consumer
acceptance, uptake and, for the value of Bitcoin.

I recently sent a payment which was not urgent so; I chose three-day
target confirmation from the fee recommendation. That transaction has
still not confirmed after now more than six days - even waiting twice
as long seems quite reasonable to me (note for accuracy: it did
eventually confirm). That transaction is a valid transaction; it is not
rubbish, junk or, spam. Under the current model with transaction
bandwidth limitation, the longer a transaction waits, the less likely
it is ever to confirm due to rising transaction numbers and being
pushed back by transactions with rising fees.

I argue that no transactions with fees above the dust level are rubbish
or junk, only some zero fee transactions might be spam. Having an ever-
increasing number of valid transactions that do not confirm as more new
transactions with higher fees are created is the opposite of operating
a robust, reliable transaction system.

While the miners have discovered a gold mine, it is the service they
provide that is valuable. If the service is unreliable they are not
worth the gold that they mine. This is reflected in the value of
Bitcoin.

Business cannot operate with a model where transactions may or may not
confirm. Even a business choosing a modest fee has no guarantee that
their valid transaction will not be shuffled down by new transactions
to the realm of never confirming after it is created. Consumers also
will not accept this model as Bitcoin expands. If Bitcoin cannot be a
reliable payment system for confirmed transactions then consumers, by
and large, will simply not accept the model once they understand.
Bitcoin will be a dirty payment system, and this will kill the value of
Bitcoin.

Under the current system, a minority of transactions will eventually be
the lucky few who have fees high enough to escape being pushed down the
list.

Once there are more than x transactions (transaction bandwidth limit)
every ten minutes, only those choosing twenty-minute confirmation (2
blocks) from the fee recommendations will have initially at most a
fifty percent chance of ever having their payment confirm by the time
2x transactions is reached. Presently, not even using fee
recommendations can ensure a sufficiently high fee is paid to ensure
transaction confirmation.

I also argue that the current auction model for limited transaction
bandwidth is wrong, is not suitable for a reliable transaction system
and, is wrong for Bitcoin. All transactions with valid fees must
confirm in due time. Currently, Bitcoin is not a safe way to send
payments.

I do not believe that consumers and business are against paying fees,
even high fees. What is required is operational reliability.

This great issue needs to be resolved for the safety and reliability of
Bitcoin. The time to resolve issues in commerce is before they become
great big issues. The time to resolve this issue is now. We must have
the foresight to identify and resolve problems before they trip us
over. Simply doubling block sizes every so often is reactionary and is
not a reliable permanent solution.

I have written this proposal for a technical solution but, need your
help to write it up to an acceptable standard to be a full BIP.

### 3. The problem

Everybody wants value. Miners want to maximise revenue from fees (and
we presume, to minimise block size). Consumers need transaction
reliability and, (we presume) want low fees.

The current transaction bandwidth limit is a limiting factor for both.
As the operational safety of transactions is limited, so is consumer
confidence as they realise the issue and, accordingly, uptake is
limited. Fees are artificially inflated due to bandwidth limitations
while failing to provide a full confirmation service for all valid
transactions.

Current fee recommendations provide no satisfaction for transaction
reliability and, as Bitcoin scales, this will worsen.

Transactions are included in blocks by miners using whatever basis they
prefer. We expect that this is usually a fee-based priority. However,
even transactions with a valid fee may be left in the transaction pool
for some time. As transaction bandwidth becomes an issue, not even
extreme fees can ensure a transaction is processed in a timely manner
or at all.

Bitcoin must be a fully scalable and reliable service, providing full
transaction confirmation for every valid transaction.

The possibility to send a transaction with a fee lower than one that is
acceptable to allow eventual transaction confirmation should be removed
from the protocol and also from the user interface.

Bitcoin should be capable of reliably and inexpensively processing
casual transactions, and also priority processing of fee paying at
auction for priority transactions in the shortest possible timeframe.

### 4. Solution summary

#### Main solution

Provide each valid transaction in the mempool with an individual
transaction priority each time before choosing transactions to include
in the current block. The priority being a function of the fee (on a
curve), and the time waiting in the transaction pool (also on a curve)
out to n days (n = 60 days ?), and extending past n days. The value for
fee on a curve may need an upper limit. The transaction priority to
serve as the likelihood of a transaction being included in the current
block, and for determining the order in which transactions are tried to
see if they will be included.

Nodes will need to keep track of when a transaction is first seen. It
is satisfactory for each node to do this independently provided the
full mempool and information survives node restart. If there is a more
reliable way to determine when a transaction was first seen on the
network then it should be utilised.

> My current default installation of Bitcoin Core v0.15.1 does not
currently seem to save and load the mempool on restart, despite the
notes in the command line options panel that the default for
persistmempool is 1. In the debug panel, some 90,000 transactions
before restart, some 200 odd shortly after. Manually setting
persistmempool=1 in the conf file does not seem to make any difference.
Perhaps it is operating as expected and I am not sure what to observe,
but does not seem to be observably saving and loading the mempool on
restart. This will need to be resolved.

Use a dynamic target block size to make the current block. This marks a
shift from using block size or weight to a count of transactions.
Determine the target block size using; pre-rollout(current average
valid transaction pool size) x ( 1 / (144 x n days ) ) = number of
transactions to be included in the current block. The block created
should be a minimum 1MB in size regardless if the target block size is
lower.

If the created block size consistently contains too few transactions
and the number of new transactions created is continuously greater than
the block size will accommodate then I expect eventually ageing
transactions will be over-represented as a portion of the block
contents. Once another new node conforming to the proposal makes a
block, the block size will be proportionately larger as the transaction
pool has grown. If block size is too large on average then this will
shrink the transaction pool.

Miners will likely want to conform to the proposal, since making blocks
larger than necessary makes more room in each block potentially
lowering the highest fees paid for priority service. Always making
blocks smaller than the proposal requires will in time lower the
utility value of Bitcoin, a different situation but akin to the
current. Transactions will still always confirm but with longer and
longer wait periods. The auction at the front of the queue for priority
will be destroyed as there will be eventually no room in blocks besides
ageing transations and, there will be little value paying higher than
the minimum fee. Obviously, neither of these scenarios are in a miner's
interests.

Without a consensus as to what size dynamic block to create,
enforcement of dynamic block size is not currently possible. It may be
possible for a consensus to be formed in the future but here I cannot
speculate. I can only suggest that it is in the interest of Bitcoin as
a whole and, in the interest of each node to conform to the proposal.
Some nodes failing to conform to the proposed requirements of dynamic
size or transaction priority in this proposal will not be destructive
to the operation of the proposal.

If necessary, nodes that have not yet adopted the proposal will just
continue to create standard fixed size unordered blocks, although, if
the current mechanisms of block validation include the fixed block size
then it is unlikely that these nodes will be able to validate the
blockchain going forward. In this case a hard fork and a full transfer
to the new method should be required. If dynamic blocks with ordered
transactions will be valid to existing nodes then only a soft fork is
required. There is no proposed change to the internal construction of
blocks, only to the block size and using an ordered method of
transaction selection.

> The default value for mempoolexpiry in Bitcoin Core may in future
need to be adjusted to match something more than n days or, perhaps
using less than n = 14 days may be a more sensible approach?

All block created with dynamic size should be verified to ensure
conformity to a probability distribution curve resulting from the
priority method. Since the input is a probability, the output should
conform to a probability distribution.

The curves used for the priority of transactions would have to be
appropriate; perhaps a mathematician with experience in probability
can develop the right formulae. My thinking is a steep curve. The
probabilities across all transactions should account for a
sufficient number of inclusions that the target block size is met on
average, although it may not always be. As a suggestion, if every
valid transaction has been tried and the target block size is still
not met, consider padding with some dust or zero-fee transactions,
highest BTC transaction value first?

**Explanation of the operation of priority:**

> If transaction priority is, for example, a number between one (low)
and one-hundred (high), it can be directly understood as the
percentage chance in one-hundred of a transaction being included in
the block. Using probability or likelihood implies that there is some
random function. Try the transactions in priority order from highest
to lowest; if random(100) < transaction priority then the transaction
is included, until the target block size is met.
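
A minimal sketch of that selection loop, assuming a priority()
function returning the 1-100 value described above; the helper names
are hypothetical.

```python
import random

def select_transactions(mempool, target_count, priority):
    """Try transactions in descending priority order and include one
    whenever random(100) falls below its priority, stopping once the
    target count for the block is met. Padding with dust or zero-fee
    transactions, if still short, would happen afterwards."""
    block = []
    for tx in sorted(mempool, key=priority, reverse=True):
        if len(block) >= target_count:
            break
        if random.uniform(0, 100) < priority(tx):
            block.append(tx)
    return block
```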

> To break it down further, if the fee-on-a-curve value and the
time-waiting-on-a-curve value are each a number between one and
one-hundred, a rudimentary method may be to simply multiply those two
numbers to find the priority number. For example, a middle fee
transaction waiting thirty days (if n = 60 days) may have a value of
five for each part (yes, just five, the values are on a curve). When
multiplied, that will give a priority value of twenty-five, or a
twenty-five percent chance at that moment of being included in the
block; it will likely be included in one of the next four blocks,
becoming more likely with each chance. If it is still not included,
then the value of time waiting will be higher, making for more
probability. A very low fee transaction would have a value for the
fee of one. It would not be until near sixty days that that
particular low fee transaction has a high likelihood of being
included in the block.
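
As a sketch of that rudimentary method only, with the curve
functions themselves left as placeholders to be designed:

```python
def rudimentary_priority(fee_curve_value, time_curve_value):
    """Both inputs are values between one and one-hundred taken from
    their respective curves; their product is read directly as the
    percentage chance of inclusion in the current block."""
    return fee_curve_value * time_curve_value

# Worked example from the text: a middle-fee transaction waiting
# thirty days (n = 60 days) scores five on each curve, so
# rudimentary_priority(5, 5) == 25, a twenty-five percent chance at
# that moment of being included in the block.
```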

In practice it may be more useful to use numbers ranging from
one-hundred for the highest fee down to a small fraction of one for
the lowest fee on the fee priority curve and, on the time waiting
curve, from one for a newly seen transaction up to a proportionately
high number above one-hundred. It is truly beyond my level of math to
resolve probability curves accurately without much trial and error.

The primary reason for addressing the issue is to ensure transactional
reliability and scalability while having each valid transaction confirm
in due time.

#### Pros

* Maximizes transaction reliability.
* Overcomes transaction bandwidth limit.
* Fully scalable.
* Maximizes possibility for consumer and business uptake.
* Maximizes total fees paid per block without reducing reliability;
because of reliability, confidence and overall uptake grow over time
and, therefore, so does the number of transactions.
* Market determines fee paid for transaction priority.
* Fee recommendations work all the way out to 30 days or greater.
* Provides additional block entropy; greater security, since there
is less probability of predicting the next block. _Although this is
not necessary, it is a product of the operation of this proposal._

#### Cons

* Could initially lower total transaction fees per block.
* Must first be programmed.

#### Pre-rollout

Nodes need to have, at a minimum, a loose understanding of the
average size of the transaction pool (since there is no consensus
value) as a prerequisite for future changes to the way blocks are
constructed.

A new network service should be constructed to meet this need. This
service makes no changes to any existing operation or function of the
node. Initially, Bitcoin Core is a suitable candidate.

For all operations we count only valid transactions.

**The service must:**

* Have an individual temporary (permanent for the runtime only)
Serial Node ID.
* Accept communication of the number of valid transactions in the
mempool of another valid Bitcoin node, along with the Serial Node ID
of the node whose value is provided.
* Disconnect the service from any non-Bitcoin node. Bitcoin Core may
handle this already?
* Expire any value not updated for k minutes (k = 30 minutes?).
* Broadcast all mempool information the node has every m minutes
(m = 10 minutes?), including its own.
* The node's own mempool information should not be broadcast or used
in calculation until the node has been up long enough for the mempool
to normalise, for at least o minutes (o = 300 minutes?).
* Alternatively, if the node's own full mempool is loaded from disk
on node restart (o = 30 minutes?).
* Only new or updated mempool values should be transmitted to the
same node. Updated includes updated with no change.
* All known mempool information must survive node restart.
* If the node's own mempool is not normalised and network information
is not available to calculate an average, just display zero.
* Internally, the average transaction pool size must return the
calculated average if an average is available or, if none is
available, just the number of valid transactions in the node's own
mempool regardless of whether it is normalised.

Bitcoin Core must use all collated information on mempool size to
calculate a figure for the average mempool size.
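
A minimal sketch of the bookkeeping this implies; the 30-minute
expiry comes from the suggested default above, and the class and
method names are hypothetical.

```python
import time

EXPIRY_SECONDS = 30 * 60  # k = 30 minutes, per the suggested default

class MempoolSizeTracker:
    """Collates reports of valid-transaction counts keyed by Serial
    Node ID and returns the network average, falling back to the
    node's own mempool count when no peer data is available."""

    def __init__(self):
        self.reports = {}  # serial_node_id -> (count, timestamp)

    def record(self, serial_node_id, valid_tx_count):
        self.reports[serial_node_id] = (valid_tx_count, time.time())

    def average(self, own_count, own_normalised):
        now = time.time()
        fresh = [count for count, seen in self.reports.values()
                 if now - seen <= EXPIRY_SECONDS]
        if own_normalised:
            fresh.append(own_count)
        if fresh:
            return sum(fresh) / len(fresh)
        return own_count  # no average available: fall back to own mempool
```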

The calculated figure should be displayed in the appropriate place in
the Debug window alongside the text Network average transactions.

Consideration must be given, before development, to the network
bandwidth this would require. All programming must be consistent
with the current
operation and conventions of Bitcoin Core. Methods must work on all
platforms.

As this new service does not affect any existing service or feature of
Bitcoin or Bitcoin Core, this can technically be programmed now and
included in Bitcoin Core at any time.

### 5. Solution operation

This is a simplified view of the operation; a brief sketch follows
the list. The actual operation will

1. Determine the target block size for the current block.
2. Assign a transaction priority to each valid transaction in the
mempool.
3. Select transactions to include in the current block using
probability, in transaction priority order, until the target block
size is met. If the target block size is still not met after all
valid transactions have been tried, include dust and zero-fee
transactions to pad.
4. Solve block.
5. Broadcast the current block when it is solved.
6. Block is received.
7. Block verification process.
8. Accept/reject block based on verification result.
9. Repeat.
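
A compressed sketch of steps 1-3 as a block template builder,
reusing the hypothetical helpers sketched earlier
(target_transaction_count, MempoolSizeTracker, select_transactions);
solving, broadcasting and verifying the block are unchanged apart
from the conformity check.

```python
def build_block_template(tracker, mempool, own_count, own_normalised,
                         priority, n_days=60):
    # 1. Determine the dynamic target for the current block.
    avg_pool = tracker.average(own_count, own_normalised)
    target = target_transaction_count(avg_pool, n_days)
    # 2-3. Assign priorities and select by probability; padding with
    # dust or zero-fee transactions up to the target and the 1MB
    # floor is omitted here.
    return select_transactions(mempool, target, priority)
```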

### 6. Closing comments

It may be possible to verify that blocks conform to the proposal by
showing that the priorities of all transactions included in the block
statistically conform to a probability distribution curve, *if* the
individual transaction priorities can be recreated. I am not that
deep into the mathematics; however, it may also be possible to use a
similar method based on the fee alone, showing that, statistically,
the block conforms to a fee distribution. Any dust and zero-fee
transactions would have to be ignored. This part of the solution
needs a competent mathematician with experience in probability and
statistical distributions.

It would be a trivial addition to this proposal for a node to
provide a suggested next block size along with each block it solves.
I am not sure that this creates any actual benefit, since the
provided next block size is only one node's view; as it is, a node
may just as well use its own view when creating a block. Providing a
next block size only adds complexity to the required operation.
However, perhaps providing the next block size accomplishes something
that is not trivial, and the feature could be included in the
operation.

Instead of the pre-rollout network service providing data on the
number of valid transactions in the mempool, it could directly
provide a suggested next block size if that is preferred, using a
similar operation to the one suggested now and averaging all received
suggested next block sizes.

It may be foreseeable in the future for Bitcoin to operate with a
network of dedicated full blockchain & mempool servers. This would
not be without challenges to overcome but would offer several
benefits, including to the operation of this proposal, especially as
the RAM and storage requirements of a full node grow. It is easy to
foresee that in just another seven years of operation a Bitcoin full
node will require at least 300GB of storage and, if the mempool only
doubles in size, over 1GB of RAM.

There has been some concern expressed over spam and very low fee
transactions, and an infinite block size resulting. I hope that for
those concerned using the dust level addresses the issue, especially as
the value of Bitcoin grows.

Notwithstanding this proposal, all blocks, including those with
dynamic size, have limited transaction space per block. This proposal
results in a fee-for-priority-service auction, where the probability
of a transaction being included in the limited space in the next
available block is auctioned to the highest bidders, and all other
transactions must wait until they reach priority by ageing to gain
significant probability. Under this proposal the mempool can grow
quite large while the confirmation service continues in a stable and
reliable manner. Several incentives for attackers are removed: there
are no longer multiple potential incentives for unnecessarily filling
blocks or flooding the mempool with transactions, whether such
transactions are fraudulent, valid or otherwise. Adoption of and
adherence to this proposal results in a reliable, stable, fee-paying
transaction confirmation service and a beneficial auction.

This proposal is necessary. I implore, at the very least, that we
use some method that ensures full transaction reliability and enables
the scalability of Bitcoin. If not this proposal, then an
alternative.

I have done as much with this proposal as I feel that I am able so far
but continue to take your feedback.

Regards,
Damian Williamson

[![Creative Commons License](https://i.creativecommons.org/l/by-sa/4.0/88x31.png)](http://creativecommons.org/licenses/by-sa/4.0/)
<span xmlns:dct="http://purl.org/dc/terms/" href="http://purl.org/dc/dcmitype/Text" property="dct:title" rel="dct:type">BIP Proposal: UTPFOTIB - Use Transaction Priority For Ordering Transactions In Blocks</span> by [Damian Williamson &lt;***@live.com.au&gt;](http://thekingjameshrmh.tumblr.com/post/168948530950/bip-proposal-utpfotib-use-transaction-priority-for-order) is licensed under a [Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/).
Based on a work at [https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-December/015371.html](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-December/015371.html).
Permissions beyond the scope of this license may be available at [https://opensource.org/licenses/BSD-3-Clause](https://opensource.org/licenses/BSD-3-Clause).
Damian Williamson via bitcoin-dev
2018-01-19 23:25:43 UTC
Permalink
An example curve:

The curve currently described here is ineffective at achieving the requirements. It does not seem nearly steep enough, resulting in too many inclusions (as it happens, this may not matter - needs further evaluation), and the lower-end values seem problematically small, but it does result in a number between 100 for the highest fee in BTC/KB and a small fraction of 1 for the lowest. This math needs to be improved.


pf(tx) = sin²( (fx - (fl - 0.00000001)) / (fh - (fl - 0.00000001)) * 1.570796326795 ) * 100


pf is the calculated priority number for the fee for tx, the specific valid transaction.
fx is the fee in BTC/KB for the specific transaction.
fl is the lowest valid fee in BTC/KB currently in the node's mempool.
fh is the highest valid fee in BTC/KB currently in the node's mempool.
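
For concreteness, a direct transcription of that curve (the constant 1.570796326795 is π/2, and the small offset keeps the lowest-fee transaction just above zero); the function name follows the definitions above and the example values are hypothetical.

```python
import math

def pf(fx, fl, fh, offset=0.00000001):
    """Fee-priority curve: 100 for the highest fee currently in the
    mempool, a small fraction of 1 for the lowest."""
    ratio = (fx - (fl - offset)) / (fh - (fl - offset))
    return math.sin(ratio * math.pi / 2) ** 2 * 100

# e.g. with fl = 0.00001 and fh = 0.001 BTC/KB:
#   pf(0.001, 0.00001, 0.001)   -> 100.0
#   pf(0.00001, 0.00001, 0.001) -> roughly 2.5e-08 (problematically small)
```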

________________________________
From: bitcoin-dev-***@lists.linuxfoundation.org <bitcoin-dev-***@lists.linuxfoundation.org> on behalf of Damian Williamson via bitcoin-dev <bitcoin-***@lists.linuxfoundation.org>
Sent: Thursday, 4 January 2018 8:01:10 PM
To: Bitcoin Protocol Discussion
Subject: [bitcoin-dev] BIP Proposal: Revised: UTPFOTIB - Use Transaction Priority For Ordering Transactions In Blocks


This proposal has a new update, mostly minor edits. Additionally, I had a logic flaw in the hard fork / soft fork declaration statement. The specific terms of the CC-BY-SA-4.0 licence the document is published under have now been updated to include additional permissions available under the MIT licence.


Recently, on Twitter:

I am looking for a capable analyst/programmer to work on a BIP proposal as co-author. Will need to format several Full BIP's per these BIP process requirements: ( https://github.com/bitcoin/bips/blob/master/bip-0002.mediawiki ) from a BIP Proposal, being two initially for non-consensus full-interoperable pre-rollout on peer service layer & API/RPC layer and, a reference implementation for Bitcoin Core per: ( https://github.com/bitcoin/bitcoin/blob/master/CONTRIBUTING.md ). Interested parties please reply via this list thread: ( https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-December/015485.html ) #Bitcoin #BIP


Regards,

Damian Williamson


Damian Williamson via bitcoin-dev
2018-01-20 12:04:20 UTC
Permalink
Tried a different approach for the curves; I would appreciate it if someone has the energy to work on this and help me resolve it a bit more scientifically:


p(tx) = (((((fx - (fl - 0.00000001)) / (fh - (fl - 0.00000001))) * 100) + 1) ^ y) + (((((wx - 0.9) / ((86400 * n) - 0.9)) * 100) + 1) ^ y)

p is the calculated priority number for tx, the specific valid transaction.
fx is the fee in BTC/KB for the specific transaction.
fl is the lowest valid fee in BTC/KB currently in the node's mempool.
fh is the highest valid fee in BTC/KB currently in the node's mempool.
wx is the current wait in seconds for tx, the specific valid transaction.
n is the number of days maximum wait consensus value.
y can be 10 or, alternatively, y can be further developed into a formula based on the number of required inclusions, to vary the steepness of the curve as the mempool size varies.

In the next step, the inclusion test must be:
if random(101^y) < p then the transaction is included;
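
A transcription of that formula and the inclusion test as a sketch, with y = 10 and the variable names defined above (wait times in seconds, fees in BTC/KB); this is the draft curve as written, not a settled specification.

```python
import random

def p(fx, fl, fh, wx, n, y=10, offset=0.00000001):
    """Combined fee and wait-time priority; each component is scaled
    to roughly 1..101 and then raised to the power y."""
    fee_term = ((fx - (fl - offset)) / (fh - (fl - offset)) * 100 + 1) ** y
    wait_term = ((wx - 0.9) / ((86400 * n) - 0.9) * 100 + 1) ** y
    return fee_term + wait_term

def included(priority, y=10):
    # Include the transaction when random(101^y) < p.
    return random.uniform(0, 101 ** y) < priority
```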

Regards,
Damian Williamson


________________________________
From: Damian Williamson <***@live.com.au>
Sent: Saturday, 20 January 2018 10:25:43 AM
To: Bitcoin Protocol Discussion
Subject: Re: [bitcoin-dev] BIP Proposal: Revised: UTPFOTIB - Use Transaction Priority For Ordering Transactions In Blocks


An example curve:

The curve curently described here is ineffective at acheiving the requirements. It seems to be not nearly steep enough resulting in too many inclusions (as it happens, this may not metter - needs further evaluation) and, the lower end values seem problematically small but, results in a number between 100 for the highest fee BTC/KB and a small fraction of 1 for the lowest. This math needs to be improved.


pf(tx) = sin2((fx-(fl-0.00000001))/(fh-(fl-0.00000001))*1.570796326795)*100


pf is the calculated priority number for the fee for tx the specifc valid transaction.
fx is the fee in BTC/KB for the specific transaction.
fl is the lowest valid fee in BTC/KB currently in the nodes mempool.
fh is the highest valid fee in BTC/KB currently in the nodes mempool.

________________________________
From: bitcoin-dev-***@lists.linuxfoundation.org <bitcoin-dev-***@lists.linuxfoundation.org> on behalf of Damian Williamson via bitcoin-dev <bitcoin-***@lists.linuxfoundation.org>
Sent: Thursday, 4 January 2018 8:01:10 PM
To: Bitcoin Protocol Discussion
Subject: [bitcoin-dev] BIP Proposal: Revised: UTPFOTIB - Use Transaction Priority For Ordering Transactions In Blocks


This proposal has a new update, mostly minor edits. Additionally, I had a logic flaw in the hard fork / soft fork declaration statement. The specific terms of the CC-BY-SA-4.0 licence the document is published under have now been updated to include additional permissions available under the MIT licence.


Recently, on Twitter:

I am looking for a capable analyst/programmer to work on a BIP proposal as co-author. Will need to format several Full BIP's per these BIP process requirements: ( https://github.com/bitcoin/bips/blob/master/bip-0002.mediawiki ) from a BIP Proposal, being two initially for non-consensus full-interoperable pre-rollout on peer service layer & API/RPC layer and, a reference implementation for Bitcoin Core per: ( https://github.com/bitcoin/bitcoin/blob/master/CONTRIBUTING.md ). Interested parties please reply via this list thread: ( https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-December/015485.html ) #Bitcoin #BIP


Regards,

Damian Williamson


________________________________
From: bitcoin-dev-***@lists.linuxfoundation.org <bitcoin-dev-***@lists.linuxfoundation.org> on behalf of Damian Williamson via bitcoin-dev <bitcoin-***@lists.linuxfoundation.org>
Sent: Monday, 1 January 2018 10:04 PM
To: bitcoin-***@lists.linuxfoundation.org
Subject: [bitcoin-dev] BIP Proposal: Revised: UTPFOTIB - Use Transaction Priority For Ordering Transactions In Blocks

Happy New Year all.

This proposal has been further amended with several minor changes and a
few additions.

I believe that all known issues raised so far have been sufficiently
addressed. Either that or, I still have more work to do.

## BIP Proposal: Revised: UTPFOTIB - Use Transaction Priority For
Ordering Transactions In Blocks

Schema:
##########
Document: BIP Proposal
Title: UTPFOTIB - Use Transaction Priority For Ordering Transactions In
Blocks
Published: 26-12-2017
Revised: 01-01-2018
Author: Damian Williamson <***@live.com.au>
Licence: Creative Commons Attribution-ShareAlike 4.0 International
License.
URL: http://thekingjameshrmh.tumblr.com/post/168948530950/bip-proposal-
utpfotib-use-transaction-priority-for-order
##########

### 1. Abstract

This document proposes to address the issue of transactional
reliability in Bitcoin, where valid transactions may be stuck in the
transaction pool for extended periods or never confirm.

There are two key issues to be resolved to achieve this:

1. The current transaction bandwidth limit.
2. The current ad-hoc methods of including transactions in blocks
resulting in variable and confusing confirmation times for valid
transactions, including transactions with a valid fee that may never
confirm.

It is important with any change to protect the value of fees as these
will eventually be the only payment that miners receive. Rather than an
auction model for limited bandwidth, the proposal results in a fee for
priority service auction model.

It would not be true to suggest that all feedback received so far has
been entirely positive although, most of it has been constructive.

The previous threads for this proposal are available here:
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-December/s
ubject.html

In all parts of this proposal, references to a transaction, a valid
transaction, a transaction with a valid fee, a valid fee, etc. is
defined as any transaction that is otherwise valid with a fee of at
least 0.00001000 BTC/KB as defined as the dust level, interpreting from
Bitcoin Core GUI. Transactions with a fee lower than this rate are
considered dust.

In all parts of this proposal, dust and zero-fee transactions are
always ignored and/or excluded unless specifically mentioned.

It is generally assumed that miners currently prefer to include
transactions with higher fees.

### 2. The need for this proposal

We all must learn to admit that transaction bandwidth is still lurking
as a serious issue for the operation, reliability, safety, consumer
acceptance, uptake and, for the value of Bitcoin.

I recently sent a payment which was not urgent so; I chose three-day
target confirmation from the fee recommendation. That transaction has
still not confirmed after now more than six days - even waiting twice
as long seems quite reasonable to me (note for accuracy: it did
eventually confirm). That transaction is a valid transaction; it is not
rubbish, junk or, spam. Under the current model with transaction
bandwidth limitation, the longer a transaction waits, the less likely
it is ever to confirm due to rising transaction numbers and being
pushed back by transactions with rising fees.

I argue that no transactions with fees above the dust level are rubbish
or junk, only some zero fee transactions might be spam. Having an ever-
increasing number of valid transactions that do not confirm as more new
transactions with higher fees are created is the opposite of operating
a robust, reliable transaction system.

While the miners have discovered a gold mine, it is the service they
provide that is valuable. If the service is unreliable they are not
worth the gold that they mine. This is reflected in the value of
Bitcoin.

Business cannot operate with a model where transactions may or may not
confirm. Even a business choosing a modest fee has no guarantee that
their valid transaction will not be shuffled down by new transactions
to the realm of never confirming after it is created. Consumers also
will not accept this model as Bitcoin expands. If Bitcoin cannot be a
reliable payment system for confirmed transactions then consumers, by
and large, will simply not accept the model once they understand.
Bitcoin will be a dirty payment system, and this will kill the value of
Bitcoin.

Under the current system, a minority of transactions will eventually be
the lucky few who have fees high enough to escape being pushed down the
list.

Once there are more than x transactions (transaction bandwidth limit)
every ten minutes, only those choosing twenty-minute confirmation (2
blocks) from the fee recommendations will have initially at most a
fifty percent chance of ever having their payment confirm by the time
2x transactions is reached. Presently, not even using fee
recommendations can ensure a sufficiently high fee is paid to ensure
transaction confirmation.

I also argue that the current auction model for limited transaction
bandwidth is wrong, is not suitable for a reliable transaction system
and, is wrong for Bitcoin. All transactions with valid fees must
confirm in due time. Currently, Bitcoin is not a safe way to send
payments.

I do not believe that consumers and business are against paying fees,
even high fees. What is required is operational reliability.

This great issue needs to be resolved for the safety and reliability of
Bitcoin. The time to resolve issues in commerce is before they become
great big issues. The time to resolve this issue is now. We must have
the foresight to identify and resolve problems before they trip us
over. Simply doubling block sizes every so often is reactionary and is
not a reliable permanent solution.

I have written this proposal for a technical solution but, need your
help to write it up to an acceptable standard to be a full BIP.

### 3. The problem

Everybody wants value. Miners want to maximise revenue from fees (and
we presume, to minimise block size). Consumers need transaction
reliability and, (we presume) want low fees.

The current transaction bandwidth limit is a limiting factor for both.
As the operational safety of transactions is limited, so is consumer
confidence as they realise the issue and, accordingly, uptake is
limited. Fees are artificially inflated due to bandwidth limitations
while failing to provide a full confirmation service for all valid
transactions.

Current fee recommendations provide no satisfaction for transaction
reliability and, as Bitcoin scales, this will worsen.

Transactions are included in blocks by miners using whatever basis they
prefer. We expect that this is usually a fee-based priority. However,
even transactions with a valid fee may be left in the transaction pool
for some time. As transaction bandwidth becomes an issue, not even
extreme fees can ensure a transaction is processed in a timely manner
or at all.

Bitcoin must be a fully scalable and reliable service, providing full
transaction confirmation for every valid transaction.

The possibility to send a transaction with a fee lower than one that is
acceptable to allow eventual transaction confirmation should be removed
from the protocol and also from the user interface.

Bitcoin should be capable of reliably and inexpensively processing
casual transactions, and also priority processing of fee paying at
auction for priority transactions in the shortest possible timeframe.

### 4. Solution summary

#### Main solution

Provide each valid transaction in the mempool with an individual
transaction priority each time before choosing transactions to include
in the current block. The priority being a function of the fee (on a
curve), and the time waiting in the transaction pool (also on a curve)
out to n days (n = 60 days ?), and extending past n days. The value for
fee on a curve may need an upper limit. The transaction priority to
serve as the likelihood of a transaction being included in the current
block, and for determining the order in which transactions are tried to
see if they will be included.

Nodes will need to keep track of when a transaction is first seen. It
is satisfactory for each node to do this independently provided the
full mempool and information survives node restart. If there is a more
reliable way to determine when a transaction was first seen on the
network then it should be utilised.

> My current default installation of Bitcoin Core v0.15.1 does not
currently seem to save and load the mempool on restart, despite the
notes in the command line options panel that the default for
persistmempool is 1. In the debug panel, some 90,000 transactions
before restart, some 200 odd shortly after. Manually setting
persistmempool=1 in the conf file does not seem to make any difference.
Perhaps it is operating as expected and I am not sure what to observe,
but does not seem to be observably saving and loading the mempool on
restart. This will need to be resolved.

Use a dynamic target block size to make the current block. This marks a
shift from using block size or weight to a count of transactions.
Determine the target block size using; pre-rollout(current average
valid transaction pool size) x ( 1 / (144 x n days ) ) = number of
transactions to be included in the current block. The block created
should be a minimum 1MB in size regardless if the target block size is
lower.

If the created block size consistently contains too few transactions
and the number of new transactions created is continuously greater than
the block size will accommodate then I expect eventually ageing
transactions will be over-represented as a portion of the block
contents. Once another new node conforming to the proposal makes a
block, the block size will be proportionately larger as the transaction
pool has grown. If block size is too large on average then this will
shrink the transaction pool.

Miners will likely want to conform to the proposal, since making blocks
larger than necessary makes more room in each block potentially
lowering the highest fees paid for priority service. Always making
blocks smaller than the proposal requires will in time lower the
utility value of Bitcoin, a different situation but akin to the
current. Transactions will still always confirm but with longer and
longer wait periods. The auction at the front of the queue for priority
will be destroyed as there will be eventually no room in blocks besides
ageing transations and, there will be little value paying higher than
the minimum fee. Obviously, neither of these scenarios are in a miner's
interests.

Without a consensus as to what size dynamic block to create,
enforcement of dynamic block size is not currently possible. It may be
possible for a consensus to be formed in the future but here I cannot
speculate. I can only suggest that it is in the interest of Bitcoin as
a whole and, in the interest of each node to conform to the proposal.
Some nodes failing to conform to the proposed requirements of dynamic
size or transaction priority in this proposal will not be destructive
to the operation of the proposal.

If necessary, nodes that have not yet adopted the proposal will just
continue to create standard fixed size unordered blocks, although, if
the current mechanisms of block validation include the fixed block size
then it is unlikely that these nodes will be able to validate the
blockchain going forward. In this case a hard fork and a full transfer
to the new method should be required. If dynamic blocks with ordered
transactions will be valid to existing nodes then only a soft fork is
required. There is no proposed change to the internal construction of
blocks, only to the block size and using an ordered method of
transaction selection.

> The default value for mempoolexpiry in Bitcoin Core may in future
need to be adjusted to match something more than n days or, perhaps
using less than n = 14 days may be a more sensible approach?

All block created with dynamic size should be verified to ensure
conformity to a probability distribution curve resulting from the
priority method. Since the input is a probability, the output should
conform to a probability distribution.

The curves used for the priority of transactions would have to be
appropriate. Perhaps a mathematician with experience in probability can
develop the right formulae. My thinking is a steep curve. I suppose
that the probability of all transactions should probably account for a
sufficient number of inclusions that the target block size is met on
average although, it may not always be. As a suggestion, consider
including some dust or zero-fee transactions to pad if each valid
transaction is tried and the target block size is not yet met, highest
BTC transaction value first?

**Explanation of the operation of priority:**

> If transaction priority is, for example, a number between one (low)
and one-hundred (high) it can be directly understood as the percentage
chance in one-hundred of a transaction being included in the block.
Using probability or likelihood infers that there is some function of
random. Try the transactions in priority order from highest to lowest,
if random (100) < transaction priority then the transaction is included
until the target block size is met.

> To break it down further, if both the fee on a curve value and the
time waiting on a curve value are each a number between one and one-
hundred, a rudimentary method may be to simply multiply those two
numbers, to find the priority number. For example, a middle fee
transaction waiting thirty days (if n = 60 days) may have a value of
five for each part (yes, just five, the values are on a curve). When
multiplied that will give a priority value of twenty-five, or, a
twenty-five percent chance at that moment of being included in the
block; it will likely be included in one of the next four blocks,
getting more likely each chance. If it is still not included then the
value of time waiting will be higher, making for more probability. A
very low fee transaction would have a value for the fee of one. It
would not be until near sixty-days that the particular low fee
transaction has a high likelihood of being included in the block.

In practice it may be more useful to use numbers representative of one-
hundred for the highest fee priority curve down to a small fraction of
one for the lowest fee and, from one for a newly seen transaction up to
a proportionately high number above one-hundred for the time waiting
curve. It is truely beyond my level of math to resolve probability
curves accurately without much trial and error.

The primary reason for addressing the issue is to ensure transactional
reliability and scalability while having each valid transaction confirm
in due time.

#### Pros

* Maximizes transaction reliability.
* Overcomes transaction bandwidth limit.
* Fully scalable.
* Maximizes possibility for consumer and business uptake.
* Maximizes total fees paid per block without reducing reliability;
because of reliability, in time confidence and overall uptake are
greater; therefore, more transactions.
* Market determines fee paid for transaction priority.
* Fee recommendations work all the way out to 30 days or greater.
* Provides additional block entropy; greater security since there is
less probability of predicting the next block. _Although this is not
necessary it is a product of the operation of this proposal._

#### Cons

* Could initially lower total transaction fees per block.
* Must be first be programmed.

#### Pre-rollout

Nodes need to have at a minimum a loose understanding of the average
(since there is no consensus) size of the transaction pool as a
requirement to enable future changes to the way blocks are constructed.

A new network service should be constructed to meet this need. This
service makes no changes to any existing operation or function of the
node. Initially, Bitcoin Core is a suitable candidate.

For all operations we count only valid transactions.

**The service must:**

* Have an individual temporary Serial Node ID (permanent only for the
current runtime).
* Accept communication of the number of valid transactions in the
mempool of another valid Bitcoin node, along with the Serial Node ID
of the node whose value is provided.
* Disconnect the service from any non-Bitcoin node (Bitcoin Core may
handle this already).
* Expire any value not updated for k minutes (k = 30 minutes?).
* Broadcast all mempool information the node has, including its own,
every m minutes (m = 10 minutes?).
* Not broadcast or use the node's own mempool information in
calculations until the node has been up long enough for the mempool to
normalise, at least o minutes (o = 300 minutes?); alternatively, o =
30 minutes if the node's own full mempool is loaded from disk on
restart.
* Transmit only new or updated mempool values to the same node, where
an update with no change still counts as updated.
* Ensure all known mempool information survives node restart.
* Display zero if the node's own mempool is not normalised and no
network information is available to calculate an average.
* Internally, return the calculated average transaction pool size if
an average is available or, if none is available, the number of valid
transactions in the node's own mempool, regardless of whether it is
normalised.

Bitcoin Core must use all collated information on mempool size to
calculate a figure for the average mempool size.
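
A minimal sketch of the bookkeeping this averaging implies, purely for
illustration; the class and method names are invented here and do not
describe any existing Bitcoin Core interface, and the expiry simply
follows the k = 30 minute suggestion above.

```python
import time

K_EXPIRY_MINUTES = 30  # expire any value not updated for k minutes (k = 30?)

class MempoolSizeTracker:
    """Collects reported mempool sizes by Serial Node ID and averages them."""

    def __init__(self, own_node_id):
        self.own_node_id = own_node_id
        self.reports = {}            # Serial Node ID -> (valid tx count, last updated)
        self.own_normalised = False  # True once this node's mempool has normalised

    def record(self, node_id, valid_tx_count):
        """Accept a mempool-size report from a node (including our own)."""
        self.reports[node_id] = (valid_tx_count, time.time())

    def _expire_stale(self):
        cutoff = time.time() - K_EXPIRY_MINUTES * 60
        self.reports = {nid: (count, ts)
                        for nid, (count, ts) in self.reports.items()
                        if ts >= cutoff}

    def average_pool_size(self, own_mempool_count):
        """Return the calculated network average if any reports are usable;
        otherwise fall back to the node's own mempool count."""
        self._expire_stale()
        counts = [count for nid, (count, ts) in self.reports.items()
                  if nid != self.own_node_id or self.own_normalised]
        if counts:
            return sum(counts) / len(counts)
        return own_mempool_count
```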

The calculated figure should be displayed in the appropriate place in
the Debug window alongside the text "Network average transactions".

Consideration must be given, before development, to the network
bandwidth this would require. All programming must be consistent with
the current operation and conventions of Bitcoin Core. Methods must
work on all platforms.

As this new service does not affect any existing service or feature of
Bitcoin or Bitcoin Core, this can technically be programmed now and
included in Bitcoin Core at any time.

### 5. Solution operation

This is a simplified view of the operation; the actual operation will
need to be determined accurately in a spec for the programmer. A rough
sketch in code follows the list.

1. Determine the target block size for the current block.
2. Assign a transaction priority to each valid transaction in the
mempool.
3. Select transactions to include in the current block using
probability in transaction priority order until the target block size
is met. If target block size is not met, include dust and zero-fee
transactions to pad.
4. Solve block.
5. Broadcast the current block when it is solved.
6. The block is received by other nodes.
7. The block goes through the verification process.
8. The block is accepted or rejected based on the verification result.
9. Repeat.
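
A sketch of steps 1 to 3 only (transaction selection; solving,
broadcast and verification are omitted). It assumes the illustrative
`priority` and `try_transaction` functions from the earlier sketch,
transaction objects carrying `fee_per_kb` and `days_waiting`
attributes, and an assumed transaction-count floor standing in for the
1 MB minimum block size; the target follows the proposal's average
pool size x (1 / (144 x n days)) rule.

```python
def target_block_size(average_pool_size, n_days=60, min_tx_count=2000):
    """Step 1: transactions to include in the current block, per the
    proposal: average pool size x (1 / (144 x n days)), with a floor
    (here an assumed transaction count standing in for the 1 MB minimum)."""
    return max(int(average_pool_size / (144 * n_days)), min_tx_count)

def select_transactions(mempool, average_pool_size, low_fee, high_fee, n_days=60):
    """Steps 2-3: assign a priority to every valid transaction, then try
    them in priority order, including each with probability equal to its
    priority, until the target block size is met."""
    target = target_block_size(average_pool_size, n_days)
    prioritised = sorted(
        mempool,
        key=lambda tx: priority(tx.fee_per_kb, low_fee, high_fee, tx.days_waiting),
        reverse=True,
    )
    block = []
    for tx in prioritised:
        if len(block) >= target:
            break
        if try_transaction(priority(tx.fee_per_kb, low_fee, high_fee, tx.days_waiting)):
            block.append(tx)
    # If the target is still not met after trying every valid transaction,
    # the proposal pads the block with dust and zero-fee transactions
    # (omitted here).
    return block
```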

### 6. Closing comments

It may be possible to verify that blocks conform to the proposal by
showing that the priorities of the transactions included in the block
statistically conform to the expected probability distribution, *if*
the individual transaction priorities can be recreated. I am not that
deep into the mathematics; however, it may also be possible to do
something similar based on the fee alone, checking that the block
statistically conforms to a fee distribution. Any dust and zero-fee
transactions would have to be ignored. This part of the solution needs
a competent mathematician with experience in probability and
statistical distributions.
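
One possible shape for such a check, sketched only as an illustration:
a crude Monte Carlo comparison rather than a worked-out statistical
test, assuming the verifier can recreate the per-transaction
priorities the block builder saw. A competent mathematician would need
to replace this with something rigorous.

```python
import random

def block_plausibility(included_priorities, candidate_priorities, trials=1000):
    """Simulate the probabilistic selection rule many times over the full
    candidate set and measure how often a simulated block has a mean
    priority at least as high as the block under verification. A result
    near zero suggests the block was not built by the priority rule."""
    if not included_priorities:
        return 1.0
    observed_mean = sum(included_priorities) / len(included_priorities)
    block_size = len(included_priorities)
    at_least_as_high = 0
    for _ in range(trials):
        simulated = []
        for prio in sorted(candidate_priorities, reverse=True):
            if len(simulated) >= block_size:
                break
            if random.uniform(0.0, 100.0) < prio:
                simulated.append(prio)
        if simulated and sum(simulated) / len(simulated) >= observed_mean:
            at_least_as_high += 1
    return at_least_as_high / trials
```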

It would be trivial to add to this proposal that a node provides a
suggested next block size along with each block it solves. I am not
sure that this creates any actual benefit, since the provided next
block size would only be one node's view; as it is, the next node may
just as well use its own view when it creates a block. Providing a
next block size adds complexity to the required operation; however, if
it turns out that what it accomplishes is not trivial, the feature
could be included in the operation.

Instead of the pre-rollout network service providing data on the
number of valid transactions in the mempool, it could directly provide
a suggested next block size if that is preferred, using an operation
similar to the one suggested now and averaging all received suggested
next block sizes.

It may be foreseeable in the future for Bitcoin to operate with a
network of dedicated full blockchain and mempool servers. This would
not be without challenges to overcome but would offer several
benefits, including to the operation of this proposal, especially as
the RAM and storage requirements of a full node grow. It is easy to
foresee that in just another seven years of operation a Bitcoin full
node will require at least 300 GB of storage and, if the mempool only
doubles in size, over 1 GB of RAM.

There has been some concern expressed over spam and very low fee
transactions resulting in an effectively unbounded block size. I hope
that, for those concerned, using the dust level addresses the issue,
especially as the value of Bitcoin grows.

Notwithstanding this proposal, all blocks, including those with
dynamic size, have limited transaction space. This proposal results in
a fee-for-priority-service auction, where the probability of a
transaction being included in the limited space of the next available
block is auctioned to the highest bidders, and all other transactions
must wait until they age into significant priority. Under this
proposal the mempool can grow quite large while the confirmation
service continues in a stable and reliable manner. Several incentives
for attackers are removed: there are no longer multiple potential
incentives for unnecessarily filling blocks or flooding the mempool
with transactions, whether such transactions are fraudulent, valid or
otherwise. Adoption of, and adherence to, this proposal results in a
reliable, stable, fee-paying transaction confirmation service and a
beneficial auction.

This proposal is necessary. I implore, at the very least, that we
adopt some method that ensures full transaction reliability and
enables the scalability of Bitcoin. If not this proposal, then an
alternative.

I have done as much with this proposal as I feel that I am able so far
but continue to take your feedback.

Regards,
Damian Williamson

[![Creative Commons License](https://i.creativecommons.org/l/by-sa/4.0/
88x31.png)](http://creativecommons.org/licenses/by-sa/4.0/)
<span xmlns:dct="http://purl.org/dc/terms/"
href="http://purl.org/dc/dcmitype/Text" property="dct:title"
rel="dct:type">BIP Proposal: UTPFOTIB - Use Transaction Priority For
Ordering Transactions In Blocks</span> by [Damian Williamson
&lt;***@live.com.au&gt;](http://thekingjameshrmh.tumblr.com/post/1
68948530950/bip-proposal-utpfotib-use-transaction-priority-for-order)
is licensed under a [Creative Commons Attribution-ShareAlike 4.0
International License](http://creativecommons.org/licenses/by-sa/4.0/).
Based on a work at [https://lists.linuxfoundation.org/pipermail/bitcoin-
dev/2017-
December/015371.html](https://lists.linuxfoundation.org/pipermail/bitco
in-dev/2017-December/015371.html).
Permissions beyond the scope of this license may be available at [https
://opensource.org/licenses/BSD-3-
Clause](https://opensource.org/licenses/BSD-3-Clause).
Alan Evans via bitcoin-dev
2018-01-20 14:46:41 UTC
Permalink
I don't see any modifications to the proposal that address the issue,
raised by a few people before, that miners will always be free to choose
their own priority.

I understand you think it's in the miners' best long-term interest to follow
these rules, but even if a miner agrees with you, if that miner thinks the
other miners are following the fee curve, they will know it makes no
overall difference if they cheat (you can't prove how long a miner has had
a transaction in their mempool).

The opportunity to cheat, the anonymity of mining, and the low negative
effect of a single cheating instance, combined with a financial incentive
to cheat, mean that cheating will be rife.


On Sat, Jan 20, 2018 at 8:04 AM, Damian Williamson via bitcoin-dev <
bitcoin-***@lists.linuxfoundation.org> wrote:

> Tried a different approach for the curves, would appreciate it if someone
> has the energy to work on this and help me to resolve it a bit more
> scientifically:
>
>
> p(tx) = (((((fx - (fl - 0.00000001)) / (fh - (fl - 0.00000001))) * 100) +
> 1) ^ y) + (((((wx - 0.9) / ((86400 * n) - 0.9)) * 100) + 1) ^ y)
>
> p is the calculated priority number for tx the specific valid transaction.
> fx is the fee in BTC/KB for the specific transaction.
> fl is the lowest valid fee in BTC/KB currently in the nodes mempool.
> fh is the highest valid fee in BTC/KB currently in the nodes mempool.
> wx is the current wait in seconds for tx the specific valid transaction.
> n is the number of days maximum wait consensus value.
> y can be 10 or, y can be further developed into a formula based on the
> number of required inclusions, to vary the steepness of the curve as the
> mempool size varies.
>
> In the next step, the random value must be:
> if random(101^y) < p then transaction is included;
>
> Regards,
> Damian Williamson
>
> ------------------------------
> *From:* Damian Williamson <***@live.com.au>
> *Sent:* Saturday, 20 January 2018 10:25:43 AM
> *To:* Bitcoin Protocol Discussion
> *Subject:* Re: [bitcoin-dev] BIP Proposal: Revised: UTPFOTIB - Use
> Transaction Priority For Ordering Transactions In Blocks
>
>
> An example curve:
>
> The curve currently described here is ineffective at achieving the
> requirements. It seems to be not nearly steep enough, resulting in too many
> inclusions (as it happens, this may not matter - needs further evaluation)
> and, the lower end values seem problematically small but, it results in a
> number between 100 for the highest fee BTC/KB and a small fraction of 1 for
> the lowest. This math needs to be improved.
>
>
> pf(tx) = sin2((fx-(fl-0.00000001))/(fh-(fl-0.00000001))*1.570796326795)*100
>
>
> pf is the calculated priority number for the fee for tx the specific valid
> transaction.
> fx is the fee in BTC/KB for the specific transaction.
> fl is the lowest valid fee in BTC/KB currently in the nodes mempool.
> fh is the highest valid fee in BTC/KB currently in the nodes mempool.
>
> ------------------------------
> *From:* bitcoin-dev-***@lists.linuxfoundation.org <
> bitcoin-dev-***@lists.linuxfoundation.org> on behalf of Damian
> Williamson via bitcoin-dev <bitcoin-***@lists.linuxfoundation.org>
> *Sent:* Thursday, 4 January 2018 8:01:10 PM
> *To:* Bitcoin Protocol Discussion
> *Subject:* [bitcoin-dev] BIP Proposal: Revised: UTPFOTIB - Use
> Transaction Priority For Ordering Transactions In Blocks
>
>
> This proposal has a new update, mostly minor edits. Additionally, I had a
> logic flaw in the hard fork / soft fork declaration statement. The specific
> terms of the CC-BY-SA-4.0 licence the document is published under have
> now been updated to include additional permissions available under the MIT
> licence.
>
>
> Recently, on Twitter:
>
> I am looking for a capable analyst/programmer to work on a BIP proposal as
> co-author. Will need to format several Full BIP's per these BIP process
> requirements: ( https://github.com/bitcoin/bips/blob/master/bip-0002.
> mediawiki ) from a BIP Proposal, being two initially for non-consensus
> full-interoperable pre-rollout on peer service layer & API/RPC layer and, a
> reference implementation for Bitcoin Core per: (
> https://github.com/bitcoin/bitcoin/blob/master/CONTRIBUTING.md ).
> Interested parties please reply via this list thread: (
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/
> 2017-December/015485.html ) #Bitcoin #BIP
>
>
> Regards,
>
> Damian Williamson
>
>
> ------------------------------
> *From:* bitcoin-dev-***@lists.linuxfoundation.org <
> bitcoin-dev-***@lists.linuxfoundation.org> on behalf of Damian
> Williamson via bitcoin-dev <bitcoin-***@lists.linuxfoundation.org>
> *Sent:* Monday, 1 January 2018 10:04 PM
> *To:* bitcoin-***@lists.linuxfoundation.org
> *Subject:* [bitcoin-dev] BIP Proposal: Revised: UTPFOTIB - Use
> Transaction Priority For Ordering Transactions In Blocks
>
> Happy New Year all.
>
> This proposal has been further amended with several minor changes and a
> few additions.
>
> I believe that all known issues raised so far have been sufficiently
> addressed. Either that or, I still have more work to do.
>
> ## BIP Proposal: Revised: UTPFOTIB - Use Transaction Priority For
> Ordering Transactions In Blocks
>
> Schema:
> ##########
> Document: BIP Proposal
> Title: UTPFOTIB - Use Transaction Priority For Ordering Transactions In
> Blocks
> Published: 26-12-2017
> Revised: 01-01-2018
> Author: Damian Williamson <***@live.com.au>
> Licence: Creative Commons Attribution-ShareAlike 4.0 International
> License.
> URL: http://thekingjameshrmh.tumblr.com/post/168948530950/bip-proposal-
> utpfotib-use-transaction-priority-for-order
> ##########
>
> ### 1. Abstract
>
> This document proposes to address the issue of transactional
> reliability in Bitcoin, where valid transactions may be stuck in the
> transaction pool for extended periods or never confirm.
>
> There are two key issues to be resolved to achieve this:
>
> 1. The current transaction bandwidth limit.
> 2. The current ad-hoc methods of including transactions in blocks
> resulting in variable and confusing confirmation times for valid
> transactions, including transactions with a valid fee that may never
> confirm.
>
> It is important with any change to protect the value of fees as these
> will eventually be the only payment that miners receive. Rather than an
> auction model for limited bandwidth, the proposal results in a fee for
> priority service auction model.
>
> It would not be true to suggest that all feedback received so far has
> been entirely positive although, most of it has been constructive.
>
> The previous threads for this proposal are available here:
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-December/subject.html
>
> In all parts of this proposal, references to a transaction, a valid
> transaction, a transaction with a valid fee, a valid fee, etc. are
> defined as any transaction that is otherwise valid with a fee of at
> least 0.00001000 BTC/KB, the dust level as interpreted from the
> Bitcoin Core GUI. Transactions with a fee lower than this rate are
> considered dust.
>
> In all parts of this proposal, dust and zero-fee transactions are
> always ignored and/or excluded unless specifically mentioned.
>
> It is generally assumed that miners currently prefer to include
> transactions with higher fees.
>
> ### 2. The need for this proposal
>
> We all must learn to admit that transaction bandwidth is still lurking
> as a serious issue for the operation, reliability, safety, consumer
> acceptance, uptake and, for the value of Bitcoin.
>
> I recently sent a payment which was not urgent so; I chose three-day
> target confirmation from the fee recommendation. That transaction has
> still not confirmed after now more than six days - even waiting twice
> as long seems quite reasonable to me (note for accuracy: it did
> eventually confirm). That transaction is a valid transaction; it is not
> rubbish, junk or, spam. Under the current model with transaction
> bandwidth limitation, the longer a transaction waits, the less likely
> it is ever to confirm due to rising transaction numbers and being
> pushed back by transactions with rising fees.
>
> I argue that no transactions with fees above the dust level are rubbish
> or junk, only some zero fee transactions might be spam. Having an ever-
> increasing number of valid transactions that do not confirm as more new
> transactions with higher fees are created is the opposite of operating
> a robust, reliable transaction system.
>
> While the miners have discovered a gold mine, it is the service they
> provide that is valuable. If the service is unreliable they are not
> worth the gold that they mine. This is reflected in the value of
> Bitcoin.
>
> Business cannot operate with a model where transactions may or may not
> confirm. Even a business choosing a modest fee has no guarantee that
> their valid transaction will not be shuffled down by new transactions
> to the realm of never confirming after it is created. Consumers also
> will not accept this model as Bitcoin expands. If Bitcoin cannot be a
> reliable payment system for confirmed transactions then consumers, by
> and large, will simply not accept the model once they understand.
> Bitcoin will be a dirty payment system, and this will kill the value of
> Bitcoin.
>
> Under the current system, a minority of transactions will eventually be
> the lucky few who have fees high enough to escape being pushed down the
> list.
>
> Once there are more than x transactions (transaction bandwidth limit)
> every ten minutes, only those choosing twenty-minute confirmation (2
> blocks) from the fee recommendations will have initially at most a
> fifty percent chance of ever having their payment confirm by the time
> 2x transactions is reached. Presently, not even using fee
> recommendations can ensure a sufficiently high fee is paid to ensure
> transaction confirmation.
>
> I also argue that the current auction model for limited transaction
> bandwidth is wrong, is not suitable for a reliable transaction system
> and, is wrong for Bitcoin. All transactions with valid fees must
> confirm in due time. Currently, Bitcoin is not a safe way to send
> payments.
>
> I do not believe that consumers and business are against paying fees,
> even high fees. What is required is operational reliability.
>
> This great issue needs to be resolved for the safety and reliability of
> Bitcoin. The time to resolve issues in commerce is before they become
> great big issues. The time to resolve this issue is now. We must have
> the foresight to identify and resolve problems before they trip us
> over. Simply doubling block sizes every so often is reactionary and is
> not a reliable permanent solution.
>
> I have written this proposal for a technical solution but, need your
> help to write it up to an acceptable standard to be a full BIP.
>
> ### 3. The problem
>
> Everybody wants value. Miners want to maximise revenue from fees (and
> we presume, to minimise block size). Consumers need transaction
> reliability and, (we presume) want low fees.
>
> The current transaction bandwidth limit is a limiting factor for both.
> As the operational safety of transactions is limited, so is consumer
> confidence as they realise the issue and, accordingly, uptake is
> limited. Fees are artificially inflated due to bandwidth limitations
> while failing to provide a full confirmation service for all valid
> transactions.
>
> Current fee recommendations provide no satisfaction for transaction
> reliability and, as Bitcoin scales, this will worsen.
>
> Transactions are included in blocks by miners using whatever basis they
> prefer. We expect that this is usually a fee-based priority. However,
> even transactions with a valid fee may be left in the transaction pool
> for some time. As transaction bandwidth becomes an issue, not even
> extreme fees can ensure a transaction is processed in a timely manner
> or at all.
>
> Bitcoin must be a fully scalable and reliable service, providing full
> transaction confirmation for every valid transaction.
>
> The possibility to send a transaction with a fee lower than one that is
> acceptable to allow eventual transaction confirmation should be removed
> from the protocol and also from the user interface.
>
> Bitcoin should be capable of reliably and inexpensively processing
> casual transactions, and also priority processing of fee paying at
> auction for priority transactions in the shortest possible timeframe.
>
> ### 4. Solution summary
>
> #### Main solution
>
> Provide each valid transaction in the mempool with an individual
> transaction priority each time before choosing transactions to include
> in the current block. The priority being a function of the fee (on a
> curve), and the time waiting in the transaction pool (also on a curve)
> out to n days (n = 60 days ?), and extending past n days. The value for
> fee on a curve may need an upper limit. The transaction priority to
> serve as the likelihood of a transaction being included in the current
> block, and for determining the order in which transactions are tried to
> see if they will be included.
>
> Nodes will need to keep track of when a transaction is first seen. It
> is satisfactory for each node to do this independently provided the
> full mempool and information survives node restart. If there is a more
> reliable way to determine when a transaction was first seen on the
> network then it should be utilised.
>
> > My current default installation of Bitcoin Core v0.15.1 does not
> currently seem to save and load the mempool on restart, despite the
> notes in the command line options panel that the default for
> persistmempool is 1. In the debug panel, some 90,000 transactions
> before restart, some 200 odd shortly after. Manually setting
> persistmempool=1 in the conf file does not seem to make any difference.
> Perhaps it is operating as expected and I am not sure what to observe,
> but does not seem to be observably saving and loading the mempool on
> restart. This will need to be resolved.
>
> Use a dynamic target block size to make the current block. This marks a
> shift from using block size or weight to a count of transactions.
> Determine the target block size using: pre-rollout (current average
> valid transaction pool size) x ( 1 / (144 x n days ) ) = number of
> transactions to be included in the current block. The block created
> should be a minimum 1MB in size regardless if the target block size is
> lower.
>
> If the created block size consistently contains too few transactions
> and the number of new transactions created is continuously greater than
> the block size will accommodate then I expect eventually ageing
> transactions will be over-represented as a portion of the block
> contents. Once another new node conforming to the proposal makes a
> block, the block size will be proportionately larger as the transaction
> pool has grown. If block size is too large on average then this will
> shrink the transaction pool.
>
> Miners will likely want to conform to the proposal, since making blocks
> larger than necessary makes more room in each block potentially
> lowering the highest fees paid for priority service. Always making
> blocks smaller than the proposal requires will in time lower the
> utility value of Bitcoin, a different situation but akin to the
> current. Transactions will still always confirm but with longer and
> longer wait periods. The auction at the front of the queue for priority
> will be destroyed as there will be eventually no room in blocks besides
> ageing transactions and, there will be little value in paying higher than
> the minimum fee. Obviously, neither of these scenarios are in a miner's
> interests.
>
> Without a consensus as to what size dynamic block to create,
> enforcement of dynamic block size is not currently possible. It may be
> possible for a consensus to be formed in the future but here I cannot
> speculate. I can only suggest that it is in the interest of Bitcoin as
> a whole and, in the interest of each node to conform to the proposal.
> Some nodes failing to conform to the proposed requirements of dynamic
> size or transaction priority in this proposal will not be destructive
> to the operation of the proposal.
>
> If necessary, nodes that have not yet adopted the proposal will just
> continue to create standard fixed size unordered blocks, although, if
> the current mechanisms of block validation include the fixed block size
> then it is unlikely that these nodes will be able to validate the
> blockchain going forward. In this case a hard fork and a full transfer
> to the new method should be required. If dynamic blocks with ordered
> transactions will be valid to existing nodes then only a soft fork is
> required. There is no proposed change to the internal construction of
> blocks, only to the block size and using an ordered method of
> transaction selection.
>
> > The default value for mempoolexpiry in Bitcoin Core may in future
> need to be adjusted to match something more than n days or, perhaps
> using less than n = 14 days may be a more sensible approach?
>
> All blocks created with dynamic size should be verified to ensure
> conformity to a probability distribution curve resulting from the
> priority method. Since the input is a probability, the output should
> conform to a probability distribution.
>
> The curves used for the priority of transactions would have to be
> appropriate. Perhaps a mathematician with experience in probability can
> develop the right formulae. My thinking is a steep curve. I suppose
> that the probability of all transactions should probably account for a
> sufficient number of inclusions that the target block size is met on
> average although, it may not always be. As a suggestion, consider
> including some dust or zero-fee transactions to pad if each valid
> transaction is tried and the target block size is not yet met, highest
> BTC transaction value first?
>
> **Explanation of the operation of priority:**
>
> > If transaction priority is, for example, a number between one (low)
> and one-hundred (high) it can be directly understood as the percentage
> chance in one-hundred of a transaction being included in the block.
> Using probability or likelihood infers that there is some function of
> random. Try the transactions in priority order from highest to lowest,
> if random (100) < transaction priority then the transaction is included
> until the target block size is met.
>
Damian Williamson via bitcoin-dev
2018-01-21 05:49:25 UTC
Permalink
Good afternoon Alan,


It is stated in the proposal that blocks are intended to be validated as the output of the priority method, to ensure that they conform. Unfortunately, the math necessary for this sort of statistical test is outside the scope of my formal education, and I will need to rely on someone else to develop what is necessary. If it turns out that this is not ultimately possible, then I suppose the proposal would need to be abandoned at that stage, since I agree that validation is necessary: blocks created by cheating should be statistically too unlikely to pass.


>All blocks created with dynamic size should be verified to ensure
conformity to a probability distribution curve resulting from the
priority method. Since the input is a probability, the output should
conform to a probability distribution.


Regards,

Damian Williamson

________________________________
From: Alan Evans <***@gmail.com>
Sent: Sunday, 21 January 2018 1:46:41 AM
To: Damian Williamson; Bitcoin Protocol Discussion
Subject: Re: [bitcoin-dev] BIP Proposal: Revised: UTPFOTIB - Use Transaction Priority For Ordering Transactions In Blocks

I don't see any modifications to the proposal that addresses the issue that miners will always be free to choose their own priority that a few people brought up before.

I understand you think it's in the miners best long-term interest to follow these rules, but even if a miner agrees with you, if that miner thinks the other miners are following the fee curve, they will know it makes no overall difference if they cheat (you can't prove how long a miner has had a transaction in their mempool).

The opportunity to cheat, the anonymity of mining, the low negative effect of a single cheating instance, all combined with a financial incentive to cheat means that cheating will be rife.


On Sat, Jan 20, 2018 at 8:04 AM, Damian Williamson via bitcoin-dev <bitcoin-***@lists.linuxfoundation.org<mailto:bitcoin-***@lists.linuxfoundation.org>> wrote:

Tried a different approach for the curves, would appreciate it if someone has the energy to work on this and help me to resolve it a bit more scientifically:


p(tx) = (((((fx - (fl - 0.00000001)) / (fh - (fl - 0.00000001))) * 100) + 1) ^ y) + (((((wx - 0.9) / ((86400 * n) - 0.9)) * 100) + 1) ^ y)

p is the calculated priority number for tx the specific valid transaction.
fx is the fee in BTC/KB for the specific transaction.
fl is the lowest valid fee in BTC/KB currently in the nodes mempool.
fh is the highest valid fee in BTC/KB currently in the nodes mempool.
wx is the current wait in seconds for tx the specific valid transaction.
n is the number of days maximum wait consensus value.
y can be 10 or, y can be a further developed to be a formula based on the number of required inclusions to vary the steepness of the curve as the mempool size varies.

In the next step, the random value must be:
if random(101^y) < p then transaction is included;

Regards,
Damian Williamson


________________________________
From: Damian Williamson <***@live.com.au<mailto:***@live.com.au>>
Sent: Saturday, 20 January 2018 10:25:43 AM
To: Bitcoin Protocol Discussion
Subject: Re: [bitcoin-dev] BIP Proposal: Revised: UTPFOTIB - Use Transaction Priority For Ordering Transactions In Blocks


An example curve:

The curve curently described here is ineffective at acheiving the requirements. It seems to be not nearly steep enough resulting in too many inclusions (as it happens, this may not metter - needs further evaluation) and, the lower end values seem problematically small but, results in a number between 100 for the highest fee BTC/KB and a small fraction of 1 for the lowest. This math needs to be improved.


pf(tx) = sin2((fx-(fl-0.00000001))/(fh-(fl-0.00000001))*1.570796326795)*100


pf is the calculated priority number for the fee for tx the specifc valid transaction.
fx is the fee in BTC/KB for the specific transaction.
fl is the lowest valid fee in BTC/KB currently in the nodes mempool.
fh is the highest valid fee in BTC/KB currently in the nodes mempool.

________________________________
From: bitcoin-dev-***@lists.linuxfoundation.org<mailto:bitcoin-dev-***@lists.linuxfoundation.org> <bitcoin-dev-***@lists.linuxfoundation.org<mailto:bitcoin-dev-***@lists.linuxfoundation.org>> on behalf of Damian Williamson via bitcoin-dev <bitcoin-***@lists.linuxfoundation.org<mailto:bitcoin-***@lists.linuxfoundation.org>>
Sent: Thursday, 4 January 2018 8:01:10 PM
To: Bitcoin Protocol Discussion
Subject: [bitcoin-dev] BIP Proposal: Revised: UTPFOTIB - Use Transaction Priority For Ordering Transactions In Blocks


This proposal has a new update, mostly minor edits. Additionally, I had a logic flaw in the hard fork / soft fork declaration statement. The specific terms of the CC-BY-SA-4.0 licence the document is published under have now been updated to include additional permissions available under the MIT licence.


Recently, on Twitter:

I am looking for a capable analyst/programmer to work on a BIP proposal as co-author. Will need to format several Full BIP's per these BIP process requirements: ( https://github.com/bitcoin/bips/blob/master/bip-0002.mediawiki ) from a BIP Proposal, being two initially for non-consensus full-interoperable pre-rollout on peer service layer & API/RPC layer and, a reference implementation for Bitcoin Core per: ( https://github.com/bitcoin/bitcoin/blob/master/CONTRIBUTING.md ). Interested parties please reply via this list thread: ( https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-December/015485.html ) #Bitcoin #BIP


Regards,

Damian Williamson


________________________________
From: bitcoin-dev-***@lists.linuxfoundation.org<mailto:bitcoin-dev-***@lists.linuxfoundation.org> <bitcoin-dev-***@lists.linuxfoundation.org<mailto:bitcoin-dev-***@lists.linuxfoundation.org>> on behalf of Damian Williamson via bitcoin-dev <bitcoin-***@lists.linuxfoundation.org<mailto:bitcoin-***@lists.linuxfoundation.org>>
Sent: Monday, 1 January 2018 10:04 PM
To: bitcoin-***@lists.linuxfoundation.org<mailto:bitcoin-***@lists.linuxfoundation.org>
Subject: [bitcoin-dev] BIP Proposal: Revised: UTPFOTIB - Use Transaction Priority For Ordering Transactions In Blocks

Happy New Year all.

This proposal has been further amended with several minor changes and a
few additions.

I believe that all known issues raised so far have been sufficiently
addressed. Either that or, I still have more work to do.

## BIP Proposal: Revised: UTPFOTIB - Use Transaction Priority For
Ordering Transactions In Blocks

Schema:
##########
Document: BIP Proposal
Title: UTPFOTIB - Use Transaction Priority For Ordering Transactions In
Blocks
Published: 26-12-2017
Revised: 01-01-2018
Author: Damian Williamson <***@live.com.au<mailto:***@live.com.au>>
Licence: Creative Commons Attribution-ShareAlike 4.0 International
License.
URL: http://thekingjameshrmh.tumblr.com/post/168948530950/bip-proposal-
utpfotib-use-transaction-priority-for-order
##########

### 1. Abstract

This document proposes to address the issue of transactional
reliability in Bitcoin, where valid transactions may be stuck in the
transaction pool for extended periods or never confirm.

There are two key issues to be resolved to achieve this:

1. The current transaction bandwidth limit.
2. The current ad-hoc methods of including transactions in blocks
resulting in variable and confusing confirmation times for valid
transactions, including transactions with a valid fee that may never
confirm.

It is important with any change to protect the value of fees as these
will eventually be the only payment that miners receive. Rather than an
auction model for limited bandwidth, the proposal results in a fee for
priority service auction model.

It would not be true to suggest that all feedback received so far has
been entirely positive although, most of it has been constructive.

The previous threads for this proposal are available here:
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-December/s
ubject.html

In all parts of this proposal, references to a transaction, a valid
transaction, a transaction with a valid fee, a valid fee, etc. is
defined as any transaction that is otherwise valid with a fee of at
least 0.00001000 BTC/KB as defined as the dust level, interpreting from
Bitcoin Core GUI. Transactions with a fee lower than this rate are
considered dust.

In all parts of this proposal, dust and zero-fee transactions are
always ignored and/or excluded unless specifically mentioned.

It is generally assumed that miners currently prefer to include
transactions with higher fees.

### 2. The need for this proposal

We all must learn to admit that transaction bandwidth is still lurking
as a serious issue for the operation, reliability, safety, consumer
acceptance, uptake and, for the value of Bitcoin.

I recently sent a payment which was not urgent so; I chose three-day
target confirmation from the fee recommendation. That transaction has
still not confirmed after now more than six days - even waiting twice
as long seems quite reasonable to me (note for accuracy: it did
eventually confirm). That transaction is a valid transaction; it is not
rubbish, junk or, spam. Under the current model with transaction
bandwidth limitation, the longer a transaction waits, the less likely
it is ever to confirm due to rising transaction numbers and being
pushed back by transactions with rising fees.

I argue that no transactions with fees above the dust level are rubbish
or junk, only some zero fee transactions might be spam. Having an ever-
increasing number of valid transactions that do not confirm as more new
transactions with higher fees are created is the opposite of operating
a robust, reliable transaction system.

While the miners have discovered a gold mine, it is the service they
provide that is valuable. If the service is unreliable they are not
worth the gold that they mine. This is reflected in the value of
Bitcoin.

Business cannot operate with a model where transactions may or may not
confirm. Even a business choosing a modest fee has no guarantee that
their valid transaction will not be shuffled down by new transactions
to the realm of never confirming after it is created. Consumers also
will not accept this model as Bitcoin expands. If Bitcoin cannot be a
reliable payment system for confirmed transactions then consumers, by
and large, will simply not accept the model once they understand.
Bitcoin will be a dirty payment system, and this will kill the value of
Bitcoin.

Under the current system, a minority of transactions will eventually be
the lucky few who have fees high enough to escape being pushed down the
list.

Once there are more than x transactions (transaction bandwidth limit)
every ten minutes, only those choosing twenty-minute confirmation (2
blocks) from the fee recommendations will have initially at most a
fifty percent chance of ever having their payment confirm by the time
2x transactions is reached. Presently, not even using fee
recommendations can ensure a sufficiently high fee is paid to ensure
transaction confirmation.

I also argue that the current auction model for limited transaction
bandwidth is wrong, is not suitable for a reliable transaction system
and, is wrong for Bitcoin. All transactions with valid fees must
confirm in due time. Currently, Bitcoin is not a safe way to send
payments.

I do not believe that consumers and business are against paying fees,
even high fees. What is required is operational reliability.

This great issue needs to be resolved for the safety and reliability of
Bitcoin. The time to resolve issues in commerce is before they become
great big issues. The time to resolve this issue is now. We must have
the foresight to identify and resolve problems before they trip us
over. Simply doubling block sizes every so often is reactionary and is
not a reliable permanent solution.

I have written this proposal for a technical solution but, need your
help to write it up to an acceptable standard to be a full BIP.

### 3. The problem

Everybody wants value. Miners want to maximise revenue from fees (and
we presume, to minimise block size). Consumers need transaction
reliability and, (we presume) want low fees.

The current transaction bandwidth limit is a limiting factor for both.
As the operational safety of transactions is limited, so is consumer
confidence as they realise the issue and, accordingly, uptake is
limited. Fees are artificially inflated due to bandwidth limitations
while failing to provide a full confirmation service for all valid
transactions.

Current fee recommendations provide no satisfaction for transaction
reliability and, as Bitcoin scales, this will worsen.

Transactions are included in blocks by miners using whatever basis they
prefer. We expect that this is usually a fee-based priority. However,
even transactions with a valid fee may be left in the transaction pool
for some time. As transaction bandwidth becomes an issue, not even
extreme fees can ensure a transaction is processed in a timely manner
or at all.

Bitcoin must be a fully scalable and reliable service, providing full
transaction confirmation for every valid transaction.

The possibility to send a transaction with a fee lower than one that is
acceptable to allow eventual transaction confirmation should be removed
from the protocol and also from the user interface.

Bitcoin should be capable of reliably and inexpensively processing
casual transactions, and also priority processing of fee paying at
auction for priority transactions in the shortest possible timeframe.

### 4. Solution summary

#### Main solution

Provide each valid transaction in the mempool with an individual
transaction priority each time before choosing transactions to include
in the current block. The priority being a function of the fee (on a
curve), and the time waiting in the transaction pool (also on a curve)
out to n days (n = 60 days ?), and extending past n days. The value for
fee on a curve may need an upper limit. The transaction priority to
serve as the likelihood of a transaction being included in the current
block, and for determining the order in which transactions are tried to
see if they will be included.

Nodes will need to keep track of when a transaction is first seen. It
is satisfactory for each node to do this independently provided the
full mempool and information survives node restart. If there is a more
reliable way to determine when a transaction was first seen on the
network then it should be utilised.

> My current default installation of Bitcoin Core v0.15.1 does not
currently seem to save and load the mempool on restart, despite the
notes in the command line options panel that the default for
persistmempool is 1. In the debug panel, some 90,000 transactions
before restart, some 200 odd shortly after. Manually setting
persistmempool=1 in the conf file does not seem to make any difference.
Perhaps it is operating as expected and I am not sure what to observe,
but does not seem to be observably saving and loading the mempool on
restart. This will need to be resolved.

Use a dynamic target block size to make the current block. This marks a
shift from using block size or weight to a count of transactions.
Determine the target block size using; pre-rollout(current average
valid transaction pool size) x ( 1 / (144 x n days ) ) = number of
transactions to be included in the current block. The block created
should be a minimum 1MB in size regardless if the target block size is
lower.

If the created block size consistently contains too few transactions
and the number of new transactions created is continuously greater than
the block size will accommodate then I expect eventually ageing
transactions will be over-represented as a portion of the block
contents. Once another new node conforming to the proposal makes a
block, the block size will be proportionately larger as the transaction
pool has grown. If block size is too large on average then this will
shrink the transaction pool.

Miners will likely want to conform to the proposal, since making blocks
larger than necessary makes more room in each block potentially
lowering the highest fees paid for priority service. Always making
blocks smaller than the proposal requires will in time lower the
utility value of Bitcoin, a different situation but akin to the
current. Transactions will still always confirm but with longer and
longer wait periods. The auction at the front of the queue for priority
will be destroyed as there will be eventually no room in blocks besides
ageing transations and, there will be little value paying higher than
the minimum fee. Obviously, neither of these scenarios are in a miner's
interests.

Without a consensus as to what size dynamic block to create,
enforcement of dynamic block size is not currently possible. It may be
possible for a consensus to be formed in the future but here I cannot
speculate. I can only suggest that it is in the interest of Bitcoin as
a whole and, in the interest of each node to conform to the proposal.
Some nodes failing to conform to the proposed requirements of dynamic
size or transaction priority in this proposal will not be destructive
to the operation of the proposal.

If necessary, nodes that have not yet adopted the proposal will just
continue to create standard fixed size unordered blocks, although, if
the current mechanisms of block validation include the fixed block size
then it is unlikely that these nodes will be able to validate the
blockchain going forward. In this case a hard fork and a full transfer
to the new method should be required. If dynamic blocks with ordered
transactions will be valid to existing nodes then only a soft fork is
required. There is no proposed change to the internal construction of
blocks, only to the block size and using an ordered method of
transaction selection.

> The default value for mempoolexpiry in Bitcoin Core may in future
need to be adjusted to match something more than n days or, perhaps
using less than n = 14 days may be a more sensible approach?

All block created with dynamic size should be verified to ensure
conformity to a probability distribution curve resulting from the
priority method. Since the input is a probability, the output should
conform to a probability distribution.

The curves used for the priority of transactions would have to be
appropriate. Perhaps a mathematician with experience in probability can
develop the right formulae. My thinking is a steep curve. I suppose
that the probability of all transactions should probably account for a
sufficient number of inclusions that the target block size is met on
average although, it may not always be. As a suggestion, consider
including some dust or zero-fee transactions to pad if each valid
transaction is tried and the target block size is not yet met, highest
BTC transaction value first?

**Explanation of the operation of priority:**

> If transaction priority is, for example, a number between one (low)
and one-hundred (high) it can be directly understood as the percentage
chance in one-hundred of a transaction being included in the block.
Using probability or likelihood infers that there is some function of
random. Try the transactions in priority order from highest to lowest,
if random (100) < transaction priority then the transaction is included
until the target block size is met.

> To break it down further, if both the fee on a curve value and the
time waiting on a curve value are each a number between one and one-
hundred, a rudimentary method may be to simply multiply those two
numbers, to find the priority number. For example, a middle fee
transaction waiting thirty days (if n = 60 days) may have a value of
five for each part (yes, just five, the values are on a curve). When
multiplied that will give a priority value of twenty-five, or, a
twenty-five percent chance at that moment of being included in the
block; it will likely be included in one of the next four blocks,
getting more likely each chance. If it is still not included then the
value of time waiting will be higher, making for more probability. A
very low fee transaction would have a value for the fee of one. It
would not be until near sixty-days that the particular low fee
transaction has a high likelihood of being included in the block.

In practice it may be more useful to use numbers representative of one-
hundred for the highest fee priority curve down to a small fraction of
one for the lowest fee and, from one for a newly seen transaction up to
a proportionately high number above one-hundred for the time waiting
curve. It is truely beyond my level of math to resolve probability
curves accurately without much trial and error.

The primary reason for addressing the issue is to ensure transactional
reliability and scalability while having each valid transaction confirm
in due time.

#### Pros

* Maximizes transaction reliability.
* Overcomes transaction bandwidth limit.
* Fully scalable.
* Maximizes possibility for consumer and business uptake.
* Maximizes total fees paid per block without reducing reliability;
because of reliability, in time confidence and overall uptake are
greater; therefore, more transactions.
* Market determines fee paid for transaction priority.
* Fee recommendations work all the way out to 30 days or greater.
* Provides additional block entropy, hence greater security, since
there is less probability of predicting the next block. _Although this
is not necessary, it is a product of the operation of this proposal._

#### Cons

* Could initially lower total transaction fees per block.
* Must first be programmed.

#### Pre-rollout

Nodes need, at a minimum, a loose understanding of the average size of
the transaction pool (since there is no consensus on its contents) as a
requirement to enable future changes to the way blocks are constructed.

A new network service should be constructed to meet this need. This
service makes no changes to any existing operation or function of the
node. Initially, Bitcoin Core is a suitable candidate.

For all operations we count only valid transactions.

**The service must:**

* Have an individual temporary Serial Node ID (persistent only for the
node's runtime).
* Accept communication of the number of valid transactions in the
mempool of another valid Bitcoin node along with the Serial Node ID of
the node whose value is provided.
* Disconnect the service from any non-Bitcoin node. Bitcoin Core may
handle this already?
* Expire any value not updated for k minutes (k = 30 minutes?).
* Broadcast all mempool information the node has every m minutes (m =
10 minutes?), including its own.
* The node's own mempool information should not be broadcast or used
in calculation until the node has been up long enough for its mempool
to normalise, for at least o minutes (o = 300 minutes?)
* Alternatively, if loading the node's own full mempool from disk on
node restart (o = 30 minutes?)
* Only new or updated mempool values should be transmitted to the
same node; an update with no change still counts as updated.
* All known mempool information must survive node restart.
* If the node's own mempool is not normalised and network information
is not available to calculate an average, just display zero.
* Internally, the average transaction pool size must return the
calculated average if one is available or, if none is available, just
the number of valid transactions in the node's own mempool regardless
of whether it is normalised.

Bitcoin Core must use all collated information on mempool size to
calculate a figure for the average mempool size.
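
The following is a bookkeeping sketch of how the collation and
averaging might work (illustrative names only, not Bitcoin Core code),
assuming k = 30 minutes for expiry of stale values.

```python
import time

EXPIRY_SECONDS = 30 * 60  # k = 30 minutes, as suggested above

class MempoolSizeTracker:
    """Keeps one (count, timestamp) pair per Serial Node ID, expires stale
    values and averages the rest; a sketch of the proposed service only."""

    def __init__(self):
        self.reports = {}  # Serial Node ID -> (valid tx count, received at)

    def record(self, serial_node_id, tx_count):
        # An update with no change still refreshes the timestamp.
        self.reports[serial_node_id] = (tx_count, time.time())

    def live_counts(self):
        # Expire any value not updated for k minutes.
        now = time.time()
        return [count for count, seen in self.reports.values()
                if now - seen < EXPIRY_SECONDS]

    def average_transactions(self, own_count, own_normalised):
        """Internal figure: the calculated network average if one is
        available, otherwise the node's own valid-transaction count."""
        counts = self.live_counts()
        if own_normalised:
            counts.append(own_count)  # own value contributes once normalised
        if counts:
            return sum(counts) / len(counts)
        return own_count
```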

The calculated figure should be displayed in the appropriate place in
the Debug window alongside the text "Network average transactions".

Consideration must be given, before development, to the network
bandwidth this would require. All programming must be consistent with
the current operation and conventions of Bitcoin Core. Methods must
work on all platforms.

As this new service does not affect any existing service or feature of
Bitcoin or Bitcoin Core, this can technically be programmed now and
included in Bitcoin Core at any time.

### 5. Solution operation

This is a simplistic view of the operation. The actual operation will
need to be determined accurately in a spec for the programmer.

1. Determine the target block size for the current block.
2. Assign a transaction priority to each valid transaction in the
mempool.
3. Select transactions to include in the current block using
probability in transaction priority order until the target block size
is met. If the target block size is not met, include dust and zero-fee
transactions to pad (see the sketch after this list).
4. Solve block.
5. Broadcast the current block when it is solved.
6. Block is received.
7. Block verification process.
8. Accept/reject block based on verification result.
9. Repeat.
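
Tying steps one to three together, a sketch that reuses the
`select_transactions()` sketch given earlier; the `dust_pool` input and
its `value` attribute are illustrative assumptions for the padding
step, and steps one and two (target size and per-transaction priority)
are assumed to have been performed already.

```python
def build_block_template(mempool, dust_pool, target_block_size):
    """Steps 1-3 as a sketch, reusing select_transactions() from above.

    `dust_pool` is an illustrative list of dust and zero-fee
    transactions, each with `size` and `value` (BTC) attributes, used
    only as padding when the target block size is not met.
    """
    selected = select_transactions(mempool, target_block_size)
    used = sum(tx.size for tx in selected)
    # Step 3 padding: add dust/zero-fee transactions, highest BTC value first.
    for tx in sorted(dust_pool, key=lambda t: t.value, reverse=True):
        if used >= target_block_size:
            break
        if used + tx.size <= target_block_size:
            selected.append(tx)
            used += tx.size
    return selected
```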

### 6. Closing comments

It may be possible to verify that blocks conform to the proposal by
showing that the probabilities of all transactions included in the
block statistically conform to a probability distribution curve, *if*
the individual transaction priority can be recreated. I am not that
deep into the mathematics; however, it may also be possible to use a
similar method based just on the fee: that, statistically, the block
conforms to a fee distribution. Any dust and zero-fee transactions
would have to be ignored. This solution needs a competent mathematician
with experience in probability and statistical distributions.
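
For what it is worth, one hedged way such a conformity check might
look, assuming the verifier can recreate each candidate transaction's
priority and ignoring the complication of the block size cap:

```python
import math

def block_statistically_conforms(included_count, candidate_priorities, z=3.0):
    """Rough conformity check on the number of included transactions.

    Under the selection rule each candidate is included roughly
    independently with probability p/100, so the included count should
    lie near its expected value; this uses a normal approximation to that
    (Poisson binomial) count and accepts the block if it is within z
    standard deviations. Dust and zero-fee padding transactions must be
    excluded before calling. The threshold z = 3.0 is an arbitrary choice.
    """
    probs = [min(max(p, 0.0), 100.0) / 100.0 for p in candidate_priorities]
    expected = sum(probs)
    variance = sum(p * (1.0 - p) for p in probs)
    if variance == 0:
        return included_count == round(expected)
    return abs(included_count - expected) <= z * math.sqrt(variance)
```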

It would be a trivial addition to this proposal for a node to provide
the next block size along with a block when it is solved. I am not sure
that this creates any actual benefit, since the provided next block
size is only one node's view; as it is, a node may seemingly just as
well use its own view and create the block. Providing a next block size
only adds complexity to the required operation; however, perhaps
providing the next block size is not trivial in what it accomplishes,
and the feature could be included in the operation.

Instead of the pre-rollout network service providing data as to valid
transactions in mempool, it could directly provide data as to the
suggested next block size if that is preferred, using a similar
operation as is suggested now and averaging all received suggested next
block sizes.

It may be foreseeable in the future for Bitcoin to operate with a
network of dedicated full blockchain & mempool servers. This would not
be without challenges to overcome but would offer several benefits,
including to the operation of this proposal, especially as the RAM and
storage requirements of a full node grow. It is easy to foresee that in
just another seven years of operation a Bitcoin Full Node will require
at least 300GB of storage and, if the mempool only doubles in size,
over 1GB of RAM.

There has been some concern expressed over spam and very low fee
transactions resulting in an effectively infinite block size. I hope
that, for those concerned, using the dust level addresses the issue,
especially as the value of Bitcoin grows.

Notwithstanding this proposal, all blocks, including those with dynamic
size, have limited transaction space per block. This proposal results
in a fee-for-priority service auction, where the probability of a
transaction being included in the limited space of the next available
block is auctioned to the highest bidders and all other transactions
must wait until they reach priority by ageing to gain significant
probability. Under this proposal the mempool can grow quite large while
the confirmation service continues in a stable and reliable manner.
Several incentives for attackers are removed, since there are no longer
multiple potential incentives for unnecessarily filling blocks or
flooding the mempool with transactions, whether such transactions are
fraudulent, valid or otherwise. Adoption of this proposal and adherence
to it results in a reliable, stable, fee-paying transaction
confirmation service and a beneficial auction.

This proposal is necessary. I implore, at the very least, that we use
some method that ensures full transaction reliability and enables the
scalability of Bitcoin. If not this proposal, then an alternative.

I have done as much with this proposal as I feel that I am able so far
but continue to take your feedback.

Regards,
Damian Williamson

[![Creative Commons License](https://i.creativecommons.org/l/by-sa/4.0/88x31.png)](http://creativecommons.org/licenses/by-sa/4.0/)
<span xmlns:dct="http://purl.org/dc/terms/"
href="http://purl.org/dc/dcmitype/Text" property="dct:title"
rel="dct:type">BIP Proposal: UTPFOTIB - Use Transaction Priority For
Ordering Transactions In Blocks</span> by [Damian Williamson
&lt;***@live.com.au&gt;](http://thekingjameshrmh.tumblr.com/post/168948530950/bip-proposal-utpfotib-use-transaction-priority-for-order)
is licensed under a [Creative Commons Attribution-ShareAlike 4.0
International License](http://creativecommons.org/licenses/by-sa/4.0/).
Based on a work at [https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-December/015371.html](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-December/015371.html).
Permissions beyond the scope of this license may be available at
[https://opensource.org/licenses/BSD-3-Clause](https://opensource.org/licenses/BSD-3-Clause).