
On the proliferation of try (and, soon, await)

source link: https://forums.swift.org/t/on-the-proliferation-of-try-and-soon-await/42621/20
Dec 2020

Since async/await tends to get used a lot for UI code, a common mistake people make is not understanding when control returns to the UI run loop, which may have the effect of making intermediate UI changes visible to the user. Knowing that every "await" in a function called from the UI thread is a temporary return to the run loop is helpful so that you know to carefully place code that changes the UI before or after that await depending on when you want those UI changes to be visible to the user.

If the call wasn't marked with "await" then you wouldn't even know to think about that, and you may have to just guess or keep checking the functions you're calling to remember whether they're async or not.
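For example, a minimal sketch of the pattern (the names `spinner`, `label`, and `fetchGreeting()` are illustrative, not real API):

```swift
// Hypothetical @MainActor UI code, for illustration only.
@MainActor
func refresh() async {
    spinner.startAnimating()          // this UI change can be rendered during the await
    let text = await fetchGreeting()  // suspension point: control returns to the run loop
    spinner.stopAnimating()           // runs only after resumption, still on the main thread
    label.text = text
}
```

Moving `spinner.startAnimating()` below the `await` would hide the spinner entirely, since no render pass could occur between starting and stopping it.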

Nevin:

Furthermore, the idea of a function call possibly mutating state, has nothing to do with synchronicity. A regular old bog-standard synchronous function could easily modify the state of the class instance from which it is called.

That's true, but to be totally fair, there's a difference with async. Assuming you are able to reason about which code has access to a given class instance, it's possible to reason that a synchronous call into code where the partially-modified instance is inaccessible will not cause the instance to be re-entered and thus observed in the partially-modified state. When the call is async, that instance is open to access, not just by the callee, but by any currently-suspended code that may resume at the point of the call.

That said, you could argue that being able to reason about which code has access to a given class instance is a fantasy that's rarely fulfilled in reality… another good reason to eschew classes :wink:

For a programmer, the interesting thing is, “Does this call initiate some concurrent task, which will execute in parallel with the current codepath, and if so how can I interact with it (observe progress, cancel, get notified when it completes, etc.)?”

Just trying to get a grip on what you're saying here… IIUC, every async call (except those into the same actor?) is a suspension point. Doesn't that mean that each one can effectively initiate a concurrent task, since some waiting task might start when this one suspends? Also, I'm not sure about “in parallel,” if we're distinguishing parallelism and concurrency. If some other thread spawns a new thread at the moment I make a call, has that call effectively initiated some task that will execute in parallel with the current codepath?

It sounds like you're talking about initiating a task that persists past the call, to which the caller can get access… which sounds like a future to me.

All this other talk of suspension points is important to the implementation, but doesn’t directly affect the surface level of the language. You always have to wait for a call to complete before the next line is executed, so why should some calls require an await keyword?

Well, yes, that is the question I'm putting on the table.

The new async let, however, is different. It actually introduces asynchronous execution. It brings concurrency into the language. And when you have an async let, then it makes sense to await it when you need the value.

OK, but the fact that the word await “makes sense” at that point in the code is not a good enough reason for the compiler to mandate it. It has to serve some purpose, alerting the reader of the code to… something. For example, & is mandated on (nearly all) inout arguments in order to alert the reader to mutation, which materially affects the reader's ability to understand the meaning of the code.

I argue that there's nothing about evaluating an async let that makes it more worthy of a mandated keyword than any other async call. Both have the same potential to allow re-entrant access to shared state that wouldn't be allowed if the call were not async. But since that issue seems increasingly marginal as actor isolation becomes stronger, it's not clear to me that it's worth the cost.
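For readers less familiar with the construct, evaluation of an async let looks roughly like this (a sketch only; `fetch` is a hypothetical async throwing function):

```swift
// Sketch, not real API: `fetch` stands in for any async throwing call.
func loadBoth() async throws -> (Data, Data) {
    async let a = fetch("profile.json")  // child task may begin running concurrently
    async let b = fetch("image.dat")
    // ... unrelated work can proceed here while both fetches are in flight ...
    return try await (a, b)              // the mandated await appears only at the use site
}
```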

QuinceyMorris:

Nevin:

The implementation detail of whether the current execution context gives up its thread during the call is irrelevant to the programmer.

It's extremely relevant, because of thread safety considerations.

Unless you're talking about the typical problems of reentrancy that occur even without threads, I don't think so. Giving up a thread means the same thread may start running some other task. It doesn't introduce a new opportunity for race conditions or deadlocks (or mangled memory).

No, I'm not talking about concurrency problems, but consider code running on the main thread in isolation.

When it's synchronous (i.e. sequential and not giving up the thread), then the code is thread safe because nothing can be interleaved. The code has complete control of its data structures.

When it's asynchronous (i.e. sequential but gives up the thread at a suspension point), other code may be interleaved into the sequence. That code is not prevented from modifying the data structures that the suspended code was also modifying, potentially corrupting them.

adamkemp:

Since async/await tends to get used a lot for UI code, a common mistake people make is not understanding when control returns to the UI run loop, which may have the effect of making intermediate UI changes visible to the user. Knowing that every "await" in a function called from the UI thread is a temporary return to the run loop is helpful so that you know to carefully place code that changes the UI before or after that await depending on when you want those UI changes to be visible to the user.

If the call wasn't marked with "await" then you wouldn't even know to think about that, and you may have to just guess or keep checking the functions you're calling to remember whether they're async or not.

OK, now we're getting somewhere. That is indeed a typical problem for class-based UI programs. That leads to some obvious questions:

  • Should we be designing the language for this paradigm when better alternatives (e.g. SwiftUI) exist?
  • Can we limit the need for await marking to those cases where it typically matters (e.g. to instance methods of classes)?
  • How is realizing the full concurrency vision going to affect this, if at all? Is the evolution of OOP UI frameworks once we have full actor isolation likely to make it a non-issue?

QuinceyMorris:

No, I'm not talking about concurrency problems, but consider code running on the main thread in isolation.

I didn't say “concurrency problems;” I said the “typical problems of reentrancy,” which are exactly what you described (and which in fact are also concurrency problems when data structures are shared). These problems are distinct from thread safety.

But they're also strongly related to thread safety, so it's an understandable mixup. In fact, I view thread safety as a matter of preventing the observation of broken invariants, where even a non-atomic Int is considered to have its invariants temporarily broken while a thread is modifying it. So at its core, it ends up being the same issue.

dabrahams:

Should we be designing the language for this paradigm when better alternatives (e.g. SwiftUI) exist?

SwiftUI would, at best, only shift the problem, though. It's no longer about the main "UI" run loop, but you still need to entertain the main "State" run loop. With so many independent pieces (users, networks, etc.), we eventually end up with synchronization somewhere. It'd be nice if we could push it all the way into the OS so that it's not our problem, but SwiftUI hasn't gone that far yet.

That said, SwiftUI requires a lot less synchronization, so I'm not sure whether await would land on the noise side or the signal side.

dabrahams:

Also, I'm not sure about “in parallel,” if we're distinguishing parallelism and concurrency.

I’m not an expert in the terminology here. I was attempting to distinguish between code that runs like this (with time progressing downward):

// do something
//      ↓
// call foo()
//      ↓
// do something
//      ↓
// use result of foo

And code that runs like this:

// do something
//      ↓
// call foo()  – – – – → foo does something
//      ↓                      ↓
// do something          foo continues doing things
//      ↓                      ↓
// use result of foo  ←  foo returns

The first example I would call “linear” or “single-path” or “synchronous”. If foo is a regular function, that’s the behavior you get. Also, if foo is asynchronous but called with await (as proposed), that’s also what you get. Things happen one after another.

The second example I would call “parallel” or “concurrent” or “asynchronous”. Things are happening at the same time in different functions.

QuinceyMorris:

The same issue: that it matters where the suspension points are. However you slice it, a small error in invariant preservation can be catastrophic.

So, I repeat, there's more risk in omitting async than (likely) in omitting try.

I repeat my request.

Could you please provide an example where the outcome would or could be different by calling and awaiting an async function, versus calling a synchronous function that does the same thing without suspending?

It's kinda hard to make a very short, plausible example, but here's some synchronous code:

func computeAdjustment(count: Int) -> Int {
    return 1
}

var myCount = 0

func testIntegrity() {
    DispatchQueue.main.asyncAfter(deadline: .now() + 2) {
        myCount += 1
    }
    print(myCount + computeAdjustment(count: myCount))
}

This prints "1". Now make it asynchronous:

func computeAdjustment(count: Int) async -> Int {
    // assume there is something asynchronous here
    return 1
}

var myCount = 0

func testIntegrity() {
    DispatchQueue.main.asyncAfter(deadline: .now() + 2) {
        myCount += 1
    }
    print(myCount + await computeAdjustment(count: myCount))
}

Now it may print "2" or "1", depending on whether the dispatched closure runs during the suspension, or after the function exits.

Hi Dave,

I think that you have a couple of good points, but are missing the bigger picture on this.

Try and await marking are key to helping programmers understand control flow in their applications, and they cover for serious deficiencies in the C++ and Java exception handling model (where many programmers ignore exceptional control flow). This failure of the C++/Java model is one of the reasons that the C++ exception-safe programming model is such a binary (all-or-nothing) thing, and one of the reasons Java doesn't interop with C very well. We should do better here. Furthermore, the back-pressure on "exception handling" logic is intentional, and is one of the things that is intended to help reduce the number of pervasively throwing methods in APIs.

Your characterization of marking being a historical artifact (whose "ship has sailed") isn't really fair IMO: many of us are very happy with it for the majority case, and believe that the original Swift 2.0 design decisions have worked out well in practice. I also personally believe that async marking is a promising (but unproven) direction to eliminate a wide range of deadlock conditions in concurrent programs when applied to the actors model. I'm not aware of any other model that achieves the same thing.

I personally think that your proposal:

dabrahams:

Instead, I suggest using a keyword in lieu of throws to simultaneously acknowledge that a function throws, and declare that we don't care exactly where it throws from:

func encode(to encoder: Encoder) throws_anywhere {

is an overreach, and "over solution" to the problem. In addition to applying to the whole function, this approach makes it appear as though it is part of the API of the function, when it is really an artifact of the implementation details / body of the function.

That said, I agree that you're on to something here and I agree with you that async will exacerbate an existing issue. There are a couple of ways to address this. One is to reduce the keyword soup by introducing a keyword that combines try and await into one keyword (similarly throws and async) - but I am convinced that we should land the base async proposal and gain usage experience with it before layering on syntactic sugar.

The other side of this is the existing point that you're observing: we have existing use cases with try that are so unnecessarily verbose that they obfuscate the logic they contain. To me, I look towards solutions that locally scope the "suppress try" behavior that you're seeking: while you pick some big cases where it appears to be aligned with functions, often this is aligned with regions of functions that are implemented in terms of throwing (and also, in the future, async) logic. That said, the whole scope of the declaration isn't necessarily implicated in this.

The solution to this seems pretty clear. We already have a scoped control flow statement that allows modifiers: the do statement. I think that we should extend it with try and await modifiers to provide the behavior you want.

Instead of your example:

func encode(to encoder: Encoder) throws_anywhere {
  var output = encoder.unkeyedContainer()
  output.encode(self.a) // no try needed!
  output.encode(self.b)
  output.encode(self.c)
}

We would instead provide:

func encode(to encoder: Encoder) throws {
  var output = encoder.unkeyedContainer()
  try do {
    output.encode(self.a) // no try needed!
    output.encode(self.b)
    output.encode(self.c)
  }
}

While this is slightly more verbose, it moves the weight into the implementation details where local scopes can be marked as implicitly trying, rather than it being at the granularity of entire decls. This seems to provide the same capability that you're looking for, but with a more composable approach that puts the burden on the implementation instead of the interface.

-Chris

QuinceyMorris:

Now it may print "2" or "1", depending on whether the dispatched closure runs during the suspension, or after the function exits.

Thanks.

In that example, the issue seems to arise from the async call breaking the order of operations within a single statement.

Instead of the left side of “+” being evaluated (myCount) and then the right side (calling computeAdjustment), the inclusion of await causes the entire print statement to be delayed until computeAdjustment completes.

This is essentially equivalent to moving the call up a line:

func testIntegrity() {
    DispatchQueue.main.asyncAfter(deadline: .now() + 2) {
        myCount += 1
    }
    let x = await computeAdjustment(count: myCount)
    print(myCount + x)
}

In that reformulation, the sync and async versions will behave identically, printing 1 or 2 based on how long computeAdjustment takes to complete.

So the complexity arises not from anything inherent to suspension, but rather because a statement containing an async call gets transformed “as if” the async call happened on the previous line, thus upending the expected order of operations.

That seems like a significant “gotcha” that people will not expect.

Well, my example wasn't very good, because the outcome may depend on when the compiler loads the value of myCount for the expression, too. That part wasn't the point of the example.

However, your re-written version still always prints "1" when synchronous, never "2". It doesn't matter when the deadline expires (relative to the access to myCount), because the fact that the code is synchronous ensures that the closure won't execute until after the function has returned — and indeed after the thread has returned to main event loop.

In the synchronous version, the closure has no effect on the rest of the function. In the asynchronous version, it might.

QuinceyMorris:

However, your re-written version still always prints "1" when synchronous, never "2". It doesn't matter when the deadline expires (relative to the access to myCount), because the fact that the code is synchronous ensures that the closure won't execute until after the function has returned — and indeed after the thread has returned to main event loop.

In the synchronous version, the closure has no effect on the rest of the function. In the asynchronous version, it might.

Well now I’m even more confused.

I thought await meant “await”.

Are you saying that the main event loop would continue to execute during the await?

(I’m assuming testIntegrity was called on the main thread. Was that your intention? I also notice it is not marked as async, but I assume it should be, right?)

Nevin:

A regular old bog-standard synchronous function could easily modify the state of the class instance from which it is called.

Wouldn't that just be a race?

Nevin:

Could you provide an example where the outcome would (or could) be different by calling and awaiting an async function, versus calling a synchronous function that does the same thing?

Async by itself doesn't do anything. I wouldn't be surprised if one struggles to come up with such an example. It's more useful with protected states (like actor) though.

Nevin:

Are you saying that the main event loop would continue to execute during the await?

Yes. The await is a suspension point, meaning a place where this code gives up its thread for other code to use. "This code" means "the rest of this function and any functions that are awaiting this function's completion". "Other code" could be anything else.

Nevin:

(I’m assuming testIntegrity was called on the main thread. Was that your intention? I also notice it is not marked as async, but I assume it should be, right?)

Well, yes. However, if this is running on the main thread, something servicing the main event loop must have started synchronously, and there had to have been a transition to an initial asynchronous function, at which point the synchronous portion would return to the main event loop.

Chris_Lattner3:

The solution to this seems pretty clear. We already have a scoped control flow statement that allows modifiers: the do statement. I think that we should extend it with try and async modifiers to provide the behavior you want.

This was the first thing that popped into my head as well, for the exact same reason, after reading Dave’s post. Except it would be await not async, right?

func processImageData() async throws -> Image {
  await try do {
    let dataResource  = loadWebResource("dataprofile.txt")
    let imageResource = loadWebResource("imagedata.dat")
    let imageTmp      = decodeImage(dataResource, imageResource)
    let imageResult   = dewarpAndCleanupImage(imageTmp)
    return imageResult
  }
}

Lantua:

Nevin:

A regular old bog-standard synchronous function could easily modify the state of the class instance from which it is called.

Wouldn't that just be a race?

I mean it could directly modify the instance. (Or call other code that does). No concurrency or suspension points needed.

QuinceyMorris:

Yes. The await is a suspension point, meaning a place where this code gives up its thread for other code to use. "This code" means "the rest of this function and any functions that are awaiting this function's completion". "Other code" could be anything else.

If the code is being run as part of the main event loop, doesn’t that make the main event loop an “ancestor” of this code (i.e. awaiting it), and thus it also gets suspended?

Conversely, if this code is not being run as part of the main event loop, then it should be on a different thread, and the situation would be a regular old race regardless of sync or async.

QuinceyMorris:

Well, yes. However, if this is running on the main thread, something servicing the main event loop must have started synchronously, and there had to have been a transition to an initial asynchronous function, at which point the synchronous portion would return to the main event loop.

The async/await proposal says that @main could be async:

@main
struct MyProgram {
  static func main() async { ... }
}

That implies to me that the entire runloop would give up its thread when anything it calls suspends.

I’m somewhat unclear on which cases it will be okay to not include the try keyword, and what that will mean. Are you suggesting something similar to @discardableResult, perhaps spelled as @discardableTry? In that case, throwing is still a control flow construct. Or are you referring to a way to mark a scope so that it doesn’t affect control flow (exit early, sort of like break) and acts more like fallthrough? But in that case, are we capturing all the errors or discarding them? I would love a couple more examples. Thank you!

Chris_Lattner3:

Try and await marking are key points of helping programmers understand control flow in their applications and cover for serious deficiencies in the C++ and Java exception handling model (where many programmers ignore exceptional control flow)

That base is already covered by “throws.” The language prevents exceptions from being ignored by requiring “throws” on throwing functions. Either a function catches all the errors that may be thrown into its body, or it has to declare that it throws. You don't need try for that.

Overcompensating by forcing try to appear in so many places really illustrates that a point I've been making has been missed: the exact control flow doesn't matter at all in so many cases: if there's no mutation, or if the mutation is only of local variables whose lifetime will end when the function throws, or if the mutation is only of local variables of some caller whose lifetime will end when it throws, or… the list goes on (see my original posting). In fact, the idea that you “can see the control flow” is an illusion in all these cases: what you think you're nailing down by making the control flow visible can easily be scrambled by an optimizer without observable effect on the program's meaning.

This failure of the C++/Java model is one of the reasons that the C++ exception-safe programming model is such a binary (all-or-nothing) thing,

I have no idea what you might mean by that, honestly.

…the back-pressure on "exception handling" logic is intentional, and is one of the things that is intended to help reduce the number of pervasively throwing methods in APIs.

By “back pressure” I think you are referring to the idea that writing try is odious and the theory that therefore people will try to avoid creating APIs that throw. The premise here is that the language should have this error-handling feature but somehow, at the same time, we need to make it painful because we want to discourage its use.

Well, I don't buy it as a language design strategy. First, it's punitive in a way that's inconsistent with the character of the rest of the language—thank goodness we haven't taken this kind of approach elsewhere, or Swift would be much less enjoyable to use. Second, I don't buy the idea that “oh but the caller will have to try, so I'd better not” ever enters the thought process of an API designer deciding whether or how to report an error. The one thing that will come up is, “the caller is almost sure to want to handle the failure right there, rather than reporting it up the chain”—typical for things like the lowest-level networking operations, which the caller is likely to retry. But again, that disincentive base is covered by the fact that the caller will have to catch, which involves more ceremony than simply checking for nil or looking at a result enum. Last of all, the thought of one try is simply not painful enough to exert any significant “back-pressure.” Remember, I'm not bringing this up because writing try is so horrible for the programmer, but because of what it does to the language, its source code base, and its community of users in aggregate, when it happens over and over in places where it can't make a difference.

Your characterization of marking being a historical artifact (whose "ship has sailed") isn't really fair IMO: many of us are very happy with it for the majority case, and believe that the original Swift 2.0 design decisions have worked out well in practice.

? I never said it was a historical artifact, and I don't understand how fairness comes into it. I do sincerely apologize if I've somehow offended, but when I say “that ship has sailed” I'm not saying anything about marking; I was talking about some of my earlier proposals. I'm merely saying it might have been viable to consider them once upon a time, but the language is too mature at this point to take such a significant turn.

I also personally believe that async marking is a promising (but unproven) direction to eliminate a wide range of deadlock conditions in concurrent programs when applied to the actors model. I'm not aware of any other model that achieves the same thing.

Interesting; I'd like to hear more about that in detail, if you don't mind. It does seem at odds with some of my understanding, though: AFAIK the proposers have not declared an intention to change actors from the unconditionally re-entrant model originally pitched, and IIUC that provably eliminates deadlocks.

I personally think that your proposal:

dabrahams:

Instead, I suggest using a keyword in lieu of throws to simultaneously acknowledge that a function throws, and declare that we don't care exactly where it throws from:

func encode(to encoder: Encoder) throws_anywhere {

is an overreach, and "over solution" to the problem. In addition to applying to the whole function, this approach makes it appear as though it is part of the API of the function, when it is really an artifact of the implementation details / body of the function.

You make a good point about the separation of API and implementation. I guess we have no other precedent for a choice like that, so it would be hard to justify.

That said, I agree that you're on to something here and I agree with you that async will exacerbate an existing issue. There are a couple of ways to address this. One is to reduce the keyword soup by introducing a keyword that combines try and await into one keyword (similarly throws and async) - but I am convinced that we should land the base async proposal and gain usage experience with it before layering on syntactic sugar.

I don't think that scales. What happens when we add an impure effect (or whatever the next effect dimension is)?

The other side of this is the existing point that you're observing: we have existing use cases with try that are so unnecessarily verbose that they obfuscate the logic they contain. To me, I look towards solutions that locally scope the "suppress try" behavior that you're seeking: while you pick some big cases where it appears to be aligned with functions, often this is aligned with regions of functions that are implemented in terms of throwing (and also, in the future, async) logic. That said, the whole scope of the declaration isn't necessarily implicated in this.

No, not necessarily, but that's been the problem with the design approach to try all along. Because there are occasional places where being alerted to the source of error propagation can be helpful, we've ignored the broad fact that in the vast majority of cases, it is irrelevant. And I maintain this is not good for programmers. If you look at what's happened with your rewritten encode example below, it gives the impression that it's somehow significant that no error can propagate from the first statement, but AFAICT there is no world in which that helps anyone think about the semantics of this function, any more than putting a non-throwing let inside the try do {...} block would make it worse. I respect your inclination to do something more tightly scoped and conservative, but I hope I've explained why I have the opposite inclination.

…instead provide:

func encode(to encoder: Encoder) throws {
  var output = encoder.unkeyedContainer()
  try do {
    output.encode(self.a) // no try needed!
    output.encode(self.b)
    output.encode(self.c)
  }
}

While this is slightly more verbose, it moves the weight into the implementation details where local scopes can be marked as implicitly trying, rather than it being at the granularity of entire decls. This seems to provide the same capability that you're looking for, but with a more composable approach that puts the burden on the implementation instead of the interface.

OK, I appreciate your willingness to consider the possibilities. But let's compare that with the code you'd write today:

func encode(to encoder: Encoder) throws {
  var output = encoder.unkeyedContainer()
  try output.encode(self.a)
  try output.encode(self.b)
  try output.encode(self.c)
}

I think you'll agree the extra level of nesting in your example is a significant syntactic cost, which makes it hard to argue that there's much improvement.

But if you'll allow me to run with your idea, I think there are two things we can do to improve it and that will give us both what we want:

  1. Eliminate the need for do and allow try { ... } to mark an entire block as throwing.

    func encode(to encoder: Encoder) throws 
    {
      var output = encoder.unkeyedContainer()
      try {
        output.encode(self.a)
        output.encode(self.b)
        output.encode(self.c)
      }
    }
    
  2. Allow that at the top level of the function:

    func encode(to encoder: Encoder) throws 
    try {
      var output = encoder.unkeyedContainer()
      output.encode(self.a)
      output.encode(self.b)
      output.encode(self.c)
    }
    

Thanks for engaging,
Dave

I know this is just my personal experience, but I’ve found the try keyword immensely useful in Swift, especially when reading code, like during code review. Errors and error handling are difficult to get right. Having those places that can throw pointed out clearly forces the reader to think about error handling.

I can’t remember the number of times that seeing a try when re-reading my code or reviewing someone else’s code triggered a conversation on error handling, often resulting in improvements to the code.

Asynchronous code is hard to get right and having those places in the code that can cause a suspension point clearly visible seems like it would have a similar impact for me.

dabrahams:

That base is already covered by “throws.” The language prevents exceptions from being ignored by requiring “throws” on throwing functions.

But that's only information available if you look at the declaration and not the call site, as we tend to favor in Swift API design. The language assists us in knowing that method throws by requiring the try keyword.

As David Hart mentions, it's extremely useful, in code reviews or when returning to code not seen in a while, that these keywords assist you in knowing that exceptions can happen and that failure states need to be taken into account.

The language already has mechanisms in place, try? and try!, to indicate that you probably don't care about the error in many cases. After you get used to them, it's like they're not there, but they still give an indication that there is an error state and that a decision needs to be made about how to handle it.
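For example (a sketch; `loadConfig` is a hypothetical throwing function, not from the thread):

```swift
import Foundation

// Hypothetical throwing function, for illustration.
func loadConfig(path: String) throws -> String {
    try String(contentsOfFile: path, encoding: .utf8)
}

let config = try? loadConfig(path: "app.conf")  // Optional: nil on failure, error discarded
let forced = try! loadConfig(path: "app.conf")  // traps at runtime if an error is thrown
```

Both spellings still mark the call site, so a reader can see that a failure state exists and that a decision was made about it.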

hartbit:

Errors and error handling are difficult to get right.

Yes; but think about why. Having to stop moving forward because a postcondition can't be satisfied (a.k.a. dealing with an error) is difficult because you can easily make temporary invariant breakage permanent. You have to make sure you undo the breakage or make it irrelevant. There are no other hard problems in error handling.

By scattering try everywhere that an error can propagate, instead of only where invariants are broken, we take the focus off what matters in analyzing the correctness of error handling code.

I can’t remember the number of times seeing that try when re-reading my code or reviewing someone else’s code triggered a conversation on error handling, often resulting in improvements to the code.

Can you tell the story of one or two of these instances? Did you find a bug because of try that you wouldn't have equally found because of the throws on the function containing the try, or, if the containing function didn't throw, that you wouldn't have found because of a surrounding catch?

Wouldn't it be better if we had an easy way to identify the code where the error-handling didn't need extensive discussion in code review, and we could write that code with less ceremony?

Asynchronous code is hard to get right, and having the places in the code that can become suspension points clearly visible seems like it would have a similar impact for me.

I understand that try and await might “feel good,” but I've seen whole programming communities accept ideas that “feel good” but whose costs in fact outweigh the benefits. I know personally I am gratified by encoding things in the type system and language, and I constantly have to guard against over-investing in mechanisms for rigor and structure when they don't actually pay off. I'm hoping that we're designing the language based on something more principled, here.

Mordil:

dabrahams:

That base is already covered by “throws.” The language prevents exceptions from being ignored by requiring “throws” on throwing functions.

But that information is only available at the declaration, not the call site, and Swift API design tends to favor clarity at the call site. The language assists us in knowing that the method throws by requiring the try keyword.

As I've said, that base is covered. The call site requires either a catch or a throws of its own.

The call site does not require throws; the declaration requires throws.

At that point I have to refer to the greater "context" of the current method's declaration to know that the method can throw, but I still have no indication of where in the method it throws without checking every called method's declaration.

func someThrowingFunc() throws {
  // some 100 lines later
  let result = otherMethod() // this call can throw
  // some N lines later
}

At this point, how would you know that this line throws, without knowing that method's declaration, and that your method throws?

How would a new Swift engineer know this, other than the compiler saying "your method has to be marked as throws because 'something' throws in it and isn't handled"?
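For contrast, here is a runnable sketch of the same shape of function with the marking Swift actually requires (`otherMethod` and `SomeError` are hypothetical stand-ins):

```swift
struct SomeError: Error {}

// Hypothetical stand-in for the `otherMethod` in the snippet above.
func otherMethod() throws -> Int {
    throw SomeError()
}

func someThrowingFunc() throws {
    // ... 100 lines later ...
    let result = try otherMethod() // `try` marks the throwing call right at the call site
    print(result)
    // ... N lines later ...
}
```

With the marker in place, the reader can spot the one line that can throw without consulting any other declaration.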

dabrahams:

  • Should we be designing the language for this paradigm when better alternatives (e.g. SwiftUI) exist?

Any UI framework will at some point have the issue of "a user just clicked a button, and the handler for that could take a long time, so how do we write clean code that has decent UX for that long-running operation?" One approach to that is something like Combine, but that can get extremely complicated very quickly. In fact, I would argue that many uses of Combine today for UI programming show how hard callback-based coding can get for relatively simple UI use cases.

Async/await provides a different way of accomplishing the same thing. With async/await, even with SwiftUI, you can have a button click handler that a) updates UI state to say "we're waiting on something", b) kicks off asynchronous work, c) receives the results of that asynchronous work, and then d) updates the UI again with the results. All of that can be in one short async function with ordinary-looking control flow, as opposed to a series of complicated callbacks chained together.
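As a sketch of that a-through-d flow, using a plain class in place of SwiftUI's @State so it stays self-contained (the names `ViewState`, `loadUser`, and `handleTap` are all hypothetical):

```swift
// Plain stand-in for UI state; in SwiftUI these would be @State properties.
final class ViewState {
    var isLoading = false
    var label = ""
}

// Hypothetical async loader standing in for real network work.
func loadUser() async -> String {
    "Jane"
}

// The button handler: one straight-line async function
// instead of a series of chained callbacks.
func handleTap(_ state: ViewState) async {
    state.isLoading = true           // (a) show a waiting state
    let name = await loadUser()      // (b) + (c) kick off async work, suspend, resume with results
    state.label = "Hello, \(name)"   // (d) update the UI with the results
    state.isLoading = false
}
```

The `await` marks the one point where control returns to the caller while the work is in flight; everything before it is visible to the user during the wait, and everything after it runs once the results arrive.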

I firmly believe that SwiftUI programmers are going to love this feature just as much as UIKit/AppKit developers.

dabrahams:

  • Can we limit the need for await marking to those cases where it typically matters (e.g. to instance methods of classes)?

It matters everywhere you have a suspension point.

dabrahams:

  • How is realizing the full concurrency vision going to affect this, if at all? Is the evolution of OOP UI frameworks once we have full actor isolation likely to make it a non-issue?

I can't speak for those designing the actors feature, but personally I don't believe "actors for everything" is a practical goal, if for no other reason than we have to interact with other parts of the system that aren't written in Swift. As with SwiftUI, this is going to be a useful feature even for people adopting actors.

David_Catmull:

dabrahams:

The call site requires either a catch or a throws of its own.

Maybe part of the disagreement here is varying definitions of "call site". Is it the actual line where a throwing function is called, or the enclosing do block, or the entire function?

All I'm saying is that it's rare that choosing one of these three ways to look at it makes any difference to your ability to understand the function. In most functions, it simply does not matter which particular statements throw. Furthermore, there's nothing special about the statement boundary that makes it the one special place where I must be forced to write try. With the right library parts, I can write arbitrarily complex functions as a single expression. With method chaining, it doesn't even need to be hard to read. At the limit, we end up with a single try at the beginning of the function.
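A sketch of what that limit looks like (the `parse` helper and `totals` function are hypothetical): the whole body is one chained expression, so a single `try` at the front covers every throwing call inside it.

```swift
struct ParseError: Error {}

// Hypothetical throwing parser for illustration.
func parse(_ field: String) throws -> Int {
    guard let n = Int(field) else { throw ParseError() }
    return n
}

// An arbitrarily complex body written as a single expression:
// one `try` at the start covers the throwing `map` inside the chain.
func totals(of lines: [String]) throws -> Int {
    try lines
        .map(parse)
        .filter { $0 > 0 }
        .reduce(0, +)
}
```

At that point the `try` conveys no more information than the `throws` on the declaration already does.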

adamkemp:

I firmly believe that SwiftUI programmers are going to love this feature just as much as UIKit/AppKit developers.

Oh, me too, absolutely! I hope I didn't give a different impression. I'm just talking about what the rules for the await marking should be.

dabrahams:

  • Can we limit the need for await marking to those cases where it typically matters (e.g. to instance methods of classes)?

It matters everywhere you have a suspension point.

To me that seems to be an obvious overstatement. If I'm doing non-realtime pure functional async programming, for example, the exact suspension points are an implementation detail, aren't they? If not, why not?

dabrahams:

  • How is realizing the full concurrency vision going to affect this, if at all? Is the evolution of OOP UI frameworks once we have full actor isolation likely to make it a non-issue?

I can't speak for those designing the actors feature, but personally I don't believe "actors for everything" is a practical goal

I think the proposers have made it pretty clear that it's a non-goal. But I still don't know how actor isolation is supposed to play out for UI programmers, or if it's going to have significant implications for the seriousness of this issue.

hartbit:

I know this is just my personal experience, but I’ve found the try keyword immensely useful in Swift, especially when reading code

+1. I don't have time to engage in detailed discussion of this topic, but I find these control flow markers really, really useful to see when reading code.

dabrahams:

By scattering try everywhere that an error can propagate, instead of only where invariants are broken, we take the focus off what matters in analyzing the correctness of error handling code.

IMO the two are completely orthogonal issues. I don't think the importance of maintaining invariants implies that marking expressions that may throw is not useful.

dabrahams:

If I'm doing non-realtime pure functional async programming, for example, the exact suspension points are an implementation detail, aren't they? If not, why not?

You got the answer to this already, so I don't understand why you're asking again.

Suspension points are places where broken invariants (to use your terminology) are made visible to (and susceptible to the interference of) the outside world. Conversely, we use synchronous code to safely corral invariant breakage (and susceptible code sequences) to avoid it being exposed to the outside world.

That's not an implementation detail.
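A minimal sketch of that point, with a hypothetical `Ledger` class whose invariant (the balance matches the recorded entries) is briefly broken across a suspension point:

```swift
// Hypothetical class whose invariant (balance equals the sum of entries)
// is briefly broken inside `record`.
final class Ledger {
    var balance = 0
    var entries: [Int] = []

    func record(_ amount: Int) async {
        balance += amount      // invariant broken here...
        await audit(amount)    // ...and this suspension point is exactly where
                               // other code may run and observe the broken state
        entries.append(amount) // invariant restored
    }

    // Stand-in for hypothetical async logging work.
    func audit(_ amount: Int) async {}
}
```

A fully synchronous `record` would have no such window: nothing else on the same concurrency context can run between the two mutations. The explicit `await` marks exactly where that guarantee is given up.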

dabrahams:

If I'm doing non-realtime pure functional async programming, for example, the exact suspension points are an implementation detail, aren't they?

In that case they may indicate places where you could consider doing additional work on the original thread or queuing up more asynchronous work while waiting for an asynchronous completion. There are examples of that in the structured concurrency proposal.

If the asynchronous calls (the suspension points) were not called out, then it wouldn't be apparent to the person writing or reading the code how the function would behave at runtime from a performance perspective. Once you add in the obligatory await markers, it may become obvious that the function is written in a way that is unnecessarily slow.

With async/await plus the structured concurrency proposal you can do relatively simple refactoring to deliberately interleave work and improve performance without having to introduce a lot of complexity. But if you take out the await keyword, it obscures what's actually happening and causes people to write bad code (either buggy or poorly performing) because they literally can't see how the code they're reading or writing actually behaves at runtime. That's why, even though the compiler certainly could work without the await keyword, it should still be mandatory at every suspension point. Our primary audience when writing code is other humans (including ourselves), not the compiler.
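A sketch of that kind of refactoring, assuming hypothetical `fetchImage`/`fetchMetadata` helpers: the await markers show exactly where the serial version loses the opportunity to overlap work, and `async let` from the structured concurrency proposal recovers it.

```swift
// Hypothetical fetchers standing in for real async work.
func fetchImage() async -> String { "image" }
func fetchMetadata() async -> String { "metadata" }

// Serial: the second fetch doesn't even start until the first await resumes.
func loadSerially() async -> (String, String) {
    let image = await fetchImage()
    let metadata = await fetchMetadata()
    return (image, metadata)
}

// Interleaved: `async let` starts both child tasks immediately; the single
// `await` marks the one point where we suspend for both results.
func loadConcurrently() async -> (String, String) {
    async let image = fetchImage()
    async let metadata = fetchMetadata()
    return await (image, metadata)
}
```

In the serial version, the two consecutive `await`s are what make the missed overlap visible to the reader in the first place.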
